2026-04-11 00:00:07.717417 | Job console starting
2026-04-11 00:00:07.741805 | Updating git repos
2026-04-11 00:00:08.036056 | Cloning repos into workspace
2026-04-11 00:00:08.305484 | Restoring repo states
2026-04-11 00:00:08.344307 | Merging changes
2026-04-11 00:00:08.344325 | Checking out repos
2026-04-11 00:00:08.720749 | Preparing playbooks
2026-04-11 00:00:09.764724 | Running Ansible setup
2026-04-11 00:00:16.965957 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-11 00:00:18.332263 |
2026-04-11 00:00:18.332393 | PLAY [Base pre]
2026-04-11 00:00:18.374489 |
2026-04-11 00:00:18.374608 | TASK [Setup log path fact]
2026-04-11 00:00:18.450633 | orchestrator | ok
2026-04-11 00:00:18.493302 |
2026-04-11 00:00:18.493444 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-11 00:00:18.553565 | orchestrator | ok
2026-04-11 00:00:18.571319 |
2026-04-11 00:00:18.571434 | TASK [emit-job-header : Print job information]
2026-04-11 00:00:18.670994 | # Job Information
2026-04-11 00:00:18.671247 | Ansible Version: 2.16.14
2026-04-11 00:00:18.671292 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-11 00:00:18.671333 | Pipeline: periodic-midnight
2026-04-11 00:00:18.671361 | Executor: 521e9411259a
2026-04-11 00:00:18.671382 | Triggered by: https://github.com/osism/testbed
2026-04-11 00:00:18.671402 | Event ID: cc93a31476bb4705a7fc1e8d570633f7
2026-04-11 00:00:18.688667 |
2026-04-11 00:00:18.688780 | LOOP [emit-job-header : Print node information]
2026-04-11 00:00:19.132640 | orchestrator | ok:
2026-04-11 00:00:19.133513 | orchestrator | # Node Information
2026-04-11 00:00:19.133583 | orchestrator | Inventory Hostname: orchestrator
2026-04-11 00:00:19.133613 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-11 00:00:19.133637 | orchestrator | Username: zuul-testbed01
2026-04-11 00:00:19.133658 | orchestrator | Distro: Debian 12.13
2026-04-11 00:00:19.133683 | orchestrator | Provider: static-testbed
2026-04-11 00:00:19.133704 | orchestrator | Region:
2026-04-11 00:00:19.133727 | orchestrator | Label: testbed-orchestrator
2026-04-11 00:00:19.133747 | orchestrator | Product Name: OpenStack Nova
2026-04-11 00:00:19.133767 | orchestrator | Interface IP: 81.163.193.140
2026-04-11 00:00:19.153916 |
2026-04-11 00:00:19.154024 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-11 00:00:21.051501 | orchestrator -> localhost | changed
2026-04-11 00:00:21.057852 |
2026-04-11 00:00:21.057943 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-11 00:00:22.990863 | orchestrator -> localhost | changed
2026-04-11 00:00:23.009969 |
2026-04-11 00:00:23.010071 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-11 00:00:23.956885 | orchestrator -> localhost | ok
2026-04-11 00:00:23.962385 |
2026-04-11 00:00:23.962476 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-11 00:00:23.999817 | orchestrator | ok
2026-04-11 00:00:24.044729 | orchestrator | included: /var/lib/zuul/builds/0543a1f4268f47198bb837b73a2a155d/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-11 00:00:24.071458 |
2026-04-11 00:00:24.071559 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-11 00:00:27.084455 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-11 00:00:27.084638 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/0543a1f4268f47198bb837b73a2a155d/work/0543a1f4268f47198bb837b73a2a155d_id_rsa
2026-04-11 00:00:27.084675 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/0543a1f4268f47198bb837b73a2a155d/work/0543a1f4268f47198bb837b73a2a155d_id_rsa.pub
2026-04-11 00:00:27.084701 | orchestrator -> localhost | The key fingerprint is:
2026-04-11 00:00:27.084724 | orchestrator -> localhost | SHA256:DlvQlDdG99WUC/gYywc+aIzN+FRA18RATcmoo7Xjow0 zuul-build-sshkey
2026-04-11 00:00:27.084746 | orchestrator -> localhost | The key's randomart image is:
2026-04-11 00:00:27.084778 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-11 00:00:27.084800 | orchestrator -> localhost | | o=++%o..=|
2026-04-11 00:00:27.084821 | orchestrator -> localhost | | o. =B O o.|
2026-04-11 00:00:27.084841 | orchestrator -> localhost | | . Bo*.* o .|
2026-04-11 00:00:27.084861 | orchestrator -> localhost | | + @ * o . |
2026-04-11 00:00:27.084880 | orchestrator -> localhost | | . S o o |
2026-04-11 00:00:27.084903 | orchestrator -> localhost | | * + |
2026-04-11 00:00:27.084923 | orchestrator -> localhost | | .Eo . |
2026-04-11 00:00:27.084988 | orchestrator -> localhost | | oo |
2026-04-11 00:00:27.085013 | orchestrator -> localhost | | .... |
2026-04-11 00:00:27.085034 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-11 00:00:27.085088 | orchestrator -> localhost | ok: Runtime: 0:00:02.100753
2026-04-11 00:00:27.114714 |
2026-04-11 00:00:27.114813 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-11 00:00:27.173992 | orchestrator | ok
2026-04-11 00:00:27.195495 | orchestrator | included: /var/lib/zuul/builds/0543a1f4268f47198bb837b73a2a155d/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-11 00:00:27.238325 |
2026-04-11 00:00:27.238442 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-11 00:00:27.271314 | orchestrator | skipping: Conditional result was False
2026-04-11 00:00:27.277892 |
2026-04-11 00:00:27.277981 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-11 00:00:28.285305 | orchestrator | changed
2026-04-11 00:00:28.297219 |
2026-04-11 00:00:28.297312 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-11 00:00:28.578451 | orchestrator | ok
2026-04-11 00:00:28.583770 |
2026-04-11 00:00:28.583853 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-11 00:00:29.117243 | orchestrator | ok
2026-04-11 00:00:29.129698 |
2026-04-11 00:00:29.129839 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-11 00:00:29.723809 | orchestrator | ok
2026-04-11 00:00:29.732345 |
2026-04-11 00:00:29.732427 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-11 00:00:29.781179 | orchestrator | skipping: Conditional result was False
2026-04-11 00:00:29.787488 |
2026-04-11 00:00:29.787570 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-11 00:00:31.531435 | orchestrator -> localhost | changed
2026-04-11 00:00:31.549523 |
2026-04-11 00:00:31.549620 | TASK [add-build-sshkey : Add back temp key]
2026-04-11 00:00:32.517273 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/0543a1f4268f47198bb837b73a2a155d/work/0543a1f4268f47198bb837b73a2a155d_id_rsa (zuul-build-sshkey)
2026-04-11 00:00:32.517462 | orchestrator -> localhost | ok: Runtime: 0:00:00.028291
2026-04-11 00:00:32.523788 |
2026-04-11 00:00:32.523876 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-11 00:00:33.212325 | orchestrator | ok
2026-04-11 00:00:33.221384 |
2026-04-11 00:00:33.221467 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-11 00:00:33.264623 | orchestrator | skipping: Conditional result was False
2026-04-11 00:00:33.379284 |
2026-04-11 00:00:33.379382 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-11 00:00:34.025057 | orchestrator | ok
2026-04-11 00:00:34.052967 |
2026-04-11 00:00:34.053068 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-11 00:00:34.101212 | orchestrator | ok
2026-04-11 00:00:34.115165 |
2026-04-11 00:00:34.115258 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-11 00:00:34.982802 | orchestrator -> localhost | ok
2026-04-11 00:00:34.988636 |
2026-04-11 00:00:34.988720 | TASK [validate-host : Collect information about the host]
2026-04-11 00:00:36.445641 | orchestrator | ok
2026-04-11 00:00:36.478749 |
2026-04-11 00:00:36.478881 | TASK [validate-host : Sanitize hostname]
2026-04-11 00:00:36.557988 | orchestrator | ok
2026-04-11 00:00:36.562383 |
2026-04-11 00:00:36.562466 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-11 00:00:37.426897 | orchestrator -> localhost | changed
2026-04-11 00:00:37.432293 |
2026-04-11 00:00:37.432374 | TASK [validate-host : Collect information about zuul worker]
2026-04-11 00:00:38.041248 | orchestrator | ok
2026-04-11 00:00:38.045615 |
2026-04-11 00:00:38.045697 | TASK [validate-host : Write out all zuul information for each host]
2026-04-11 00:00:39.622132 | orchestrator -> localhost | changed
2026-04-11 00:00:39.630515 |
2026-04-11 00:00:39.630600 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-11 00:00:39.987957 | orchestrator | ok
2026-04-11 00:00:40.010294 |
2026-04-11 00:00:40.010389 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-11 00:02:10.160039 | orchestrator | changed:
2026-04-11 00:02:10.161586 | orchestrator | .d..t...... src/
2026-04-11 00:02:10.161660 | orchestrator | .d..t...... src/github.com/
2026-04-11 00:02:10.161694 | orchestrator | .d..t...... src/github.com/osism/
2026-04-11 00:02:10.161723 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-11 00:02:10.161751 | orchestrator | RedHat.yml
2026-04-11 00:02:10.178393 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-11 00:02:10.178411 | orchestrator | RedHat.yml
2026-04-11 00:02:10.178463 | orchestrator | = 1.53.0"...
2026-04-11 00:02:24.904293 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-04-11 00:02:24.919871 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-11 00:02:25.054535 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-11 00:02:25.880538 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-11 00:02:25.942374 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-11 00:02:26.584835 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-11 00:02:26.646508 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-11 00:02:27.250525 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-11 00:02:27.250590 | orchestrator |
2026-04-11 00:02:27.250596 | orchestrator | Providers are signed by their developers.
2026-04-11 00:02:27.250601 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-11 00:02:27.250613 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-11 00:02:27.250650 | orchestrator |
2026-04-11 00:02:27.250656 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-11 00:02:27.250660 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-11 00:02:27.250681 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-11 00:02:27.250692 | orchestrator | you run "tofu init" in the future.
2026-04-11 00:02:27.251080 | orchestrator |
2026-04-11 00:02:27.251120 | orchestrator | OpenTofu has been successfully initialized!
2026-04-11 00:02:27.251143 | orchestrator |
2026-04-11 00:02:27.251148 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-11 00:02:27.251153 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-11 00:02:27.251157 | orchestrator | should now work.
2026-04-11 00:02:27.251161 | orchestrator |
2026-04-11 00:02:27.251165 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-11 00:02:27.251169 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-11 00:02:27.251180 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-11 00:02:27.472427 | orchestrator | Created and switched to workspace "ci"!
2026-04-11 00:02:27.472485 | orchestrator |
2026-04-11 00:02:27.472492 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-11 00:02:27.472498 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-11 00:02:27.472503 | orchestrator | for this configuration.
2026-04-11 00:02:28.089991 | orchestrator | ci.auto.tfvars
2026-04-11 00:02:28.110109 | orchestrator | default_custom.tf
2026-04-11 00:02:29.667410 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-11 00:02:30.242632 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-11 00:02:30.582769 | orchestrator |
2026-04-11 00:02:30.582854 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-11 00:02:30.582863 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-11 00:02:30.582891 | orchestrator |   + create
2026-04-11 00:02:30.582905 | orchestrator |  <= read (data resources)
2026-04-11 00:02:30.582911 | orchestrator |
2026-04-11 00:02:30.582915 | orchestrator | OpenTofu will perform the following actions:
2026-04-11 00:02:30.583799 | orchestrator |
2026-04-11 00:02:30.583811 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-04-11 00:02:30.583816 | orchestrator |   # (config refers to values not yet known)
2026-04-11 00:02:30.583820 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-04-11 00:02:30.583839 | orchestrator |       + checksum = (known after apply)
2026-04-11 00:02:30.583844 | orchestrator |       + created_at = (known after apply)
2026-04-11 00:02:30.583848 | orchestrator |       + file = (known after apply)
2026-04-11 00:02:30.583852 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.583876 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.583880 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-11 00:02:30.583884 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-11 00:02:30.583888 | orchestrator |       + most_recent = true
2026-04-11 00:02:30.583893 | orchestrator |       + name = (known after apply)
2026-04-11 00:02:30.583897 | orchestrator |       + protected = (known after apply)
2026-04-11 00:02:30.583901 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.583908 | orchestrator |       + schema = (known after apply)
2026-04-11 00:02:30.583912 | orchestrator |       + size_bytes = (known after apply)
2026-04-11 00:02:30.583916 | orchestrator |       + tags = (known after apply)
2026-04-11 00:02:30.583919 | orchestrator |       + updated_at = (known after apply)
2026-04-11 00:02:30.583923 | orchestrator |     }
2026-04-11 00:02:30.583928 | orchestrator |
2026-04-11 00:02:30.583932 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-04-11 00:02:30.583936 | orchestrator |   # (config refers to values not yet known)
2026-04-11 00:02:30.583940 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-04-11 00:02:30.583944 | orchestrator |       + checksum = (known after apply)
2026-04-11 00:02:30.583948 | orchestrator |       + created_at = (known after apply)
2026-04-11 00:02:30.583952 | orchestrator |       + file = (known after apply)
2026-04-11 00:02:30.583955 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.583959 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.583963 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-11 00:02:30.583967 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-11 00:02:30.583970 | orchestrator |       + most_recent = true
2026-04-11 00:02:30.583974 | orchestrator |       + name = (known after apply)
2026-04-11 00:02:30.583978 | orchestrator |       + protected = (known after apply)
2026-04-11 00:02:30.583982 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.583985 | orchestrator |       + schema = (known after apply)
2026-04-11 00:02:30.583989 | orchestrator |       + size_bytes = (known after apply)
2026-04-11 00:02:30.583993 | orchestrator |       + tags = (known after apply)
2026-04-11 00:02:30.583997 | orchestrator |       + updated_at = (known after apply)
2026-04-11 00:02:30.584000 | orchestrator |     }
2026-04-11 00:02:30.584004 | orchestrator |
2026-04-11 00:02:30.584008 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-04-11 00:02:30.584012 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-04-11 00:02:30.584016 | orchestrator |       + content = (known after apply)
2026-04-11 00:02:30.584020 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-11 00:02:30.584024 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-11 00:02:30.584027 | orchestrator |       + content_md5 = (known after apply)
2026-04-11 00:02:30.584031 | orchestrator |       + content_sha1 = (known after apply)
2026-04-11 00:02:30.584035 | orchestrator |       + content_sha256 = (known after apply)
2026-04-11 00:02:30.584039 | orchestrator |       + content_sha512 = (known after apply)
2026-04-11 00:02:30.584043 | orchestrator |       + directory_permission = "0777"
2026-04-11 00:02:30.584046 | orchestrator |       + file_permission = "0644"
2026-04-11 00:02:30.584050 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-04-11 00:02:30.584054 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.584058 | orchestrator |     }
2026-04-11 00:02:30.584061 | orchestrator |
2026-04-11 00:02:30.584065 | orchestrator |   # local_file.id_rsa_pub will be created
2026-04-11 00:02:30.584069 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-04-11 00:02:30.584073 | orchestrator |       + content = (known after apply)
2026-04-11 00:02:30.584077 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-11 00:02:30.584080 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-11 00:02:30.584084 | orchestrator |       + content_md5 = (known after apply)
2026-04-11 00:02:30.584088 | orchestrator |       + content_sha1 = (known after apply)
2026-04-11 00:02:30.584092 | orchestrator |       + content_sha256 = (known after apply)
2026-04-11 00:02:30.584095 | orchestrator |       + content_sha512 = (known after apply)
2026-04-11 00:02:30.584099 | orchestrator |       + directory_permission = "0777"
2026-04-11 00:02:30.584103 | orchestrator |       + file_permission = "0644"
2026-04-11 00:02:30.584111 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-04-11 00:02:30.584114 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.584118 | orchestrator |     }
2026-04-11 00:02:30.584125 | orchestrator |
2026-04-11 00:02:30.584135 | orchestrator |   # local_file.inventory will be created
2026-04-11 00:02:30.584139 | orchestrator |   + resource "local_file" "inventory" {
2026-04-11 00:02:30.584143 | orchestrator |       + content = (known after apply)
2026-04-11 00:02:30.584147 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-11 00:02:30.584150 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-11 00:02:30.584154 | orchestrator |       + content_md5 = (known after apply)
2026-04-11 00:02:30.584158 | orchestrator |       + content_sha1 = (known after apply)
2026-04-11 00:02:30.584162 | orchestrator |       + content_sha256 = (known after apply)
2026-04-11 00:02:30.584166 | orchestrator |       + content_sha512 = (known after apply)
2026-04-11 00:02:30.584169 | orchestrator |       + directory_permission = "0777"
2026-04-11 00:02:30.584173 | orchestrator |       + file_permission = "0644"
2026-04-11 00:02:30.584177 | orchestrator |       + filename = "inventory.ci"
2026-04-11 00:02:30.584181 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.584185 | orchestrator |     }
2026-04-11 00:02:30.584188 | orchestrator |
2026-04-11 00:02:30.584192 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-04-11 00:02:30.584196 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-04-11 00:02:30.584200 | orchestrator |       + content = (sensitive value)
2026-04-11 00:02:30.584204 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-11 00:02:30.584208 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-11 00:02:30.584211 | orchestrator |       + content_md5 = (known after apply)
2026-04-11 00:02:30.584215 | orchestrator |       + content_sha1 = (known after apply)
2026-04-11 00:02:30.584219 | orchestrator |       + content_sha256 = (known after apply)
2026-04-11 00:02:30.584223 | orchestrator |       + content_sha512 = (known after apply)
2026-04-11 00:02:30.584227 | orchestrator |       + directory_permission = "0700"
2026-04-11 00:02:30.584230 | orchestrator |       + file_permission = "0600"
2026-04-11 00:02:30.584234 | orchestrator |       + filename = ".id_rsa.ci"
2026-04-11 00:02:30.584238 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.584242 | orchestrator |     }
2026-04-11 00:02:30.584247 | orchestrator |
2026-04-11 00:02:30.584251 | orchestrator |   # null_resource.node_semaphore will be created
2026-04-11 00:02:30.584255 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-04-11 00:02:30.584258 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.584262 | orchestrator |     }
2026-04-11 00:02:30.586291 | orchestrator |
2026-04-11 00:02:30.586320 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-11 00:02:30.586325 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-11 00:02:30.586329 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.586333 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.586337 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.586341 | orchestrator |       + image_id = (known after apply)
2026-04-11 00:02:30.586345 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.586349 | orchestrator |       + name = "testbed-volume-manager-base"
2026-04-11 00:02:30.586353 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.586357 | orchestrator |       + size = 80
2026-04-11 00:02:30.586361 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.586365 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.586369 | orchestrator |     }
2026-04-11 00:02:30.586372 | orchestrator |
2026-04-11 00:02:30.586376 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-11 00:02:30.586381 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-11 00:02:30.586385 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.586389 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.586392 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.586405 | orchestrator |       + image_id = (known after apply)
2026-04-11 00:02:30.586409 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.586413 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-04-11 00:02:30.586417 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.586420 | orchestrator |       + size = 80
2026-04-11 00:02:30.586424 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.586428 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.586432 | orchestrator |     }
2026-04-11 00:02:30.586436 | orchestrator |
2026-04-11 00:02:30.586439 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-11 00:02:30.586443 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-11 00:02:30.586447 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.586451 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.586454 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.586458 | orchestrator |       + image_id = (known after apply)
2026-04-11 00:02:30.586462 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.586466 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-04-11 00:02:30.586470 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.586473 | orchestrator |       + size = 80
2026-04-11 00:02:30.586477 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.586481 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.586485 | orchestrator |     }
2026-04-11 00:02:30.586488 | orchestrator |
2026-04-11 00:02:30.586492 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-11 00:02:30.586496 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-11 00:02:30.586500 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.586504 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.586508 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.586511 | orchestrator |       + image_id = (known after apply)
2026-04-11 00:02:30.586515 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.586519 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-04-11 00:02:30.586523 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.586526 | orchestrator |       + size = 80
2026-04-11 00:02:30.586530 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.586534 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.586538 | orchestrator |     }
2026-04-11 00:02:30.586542 | orchestrator |
2026-04-11 00:02:30.586545 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-11 00:02:30.586549 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-11 00:02:30.586553 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.586557 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.586561 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.586564 | orchestrator |       + image_id = (known after apply)
2026-04-11 00:02:30.586568 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.586575 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-04-11 00:02:30.586579 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.586583 | orchestrator |       + size = 80
2026-04-11 00:02:30.586586 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.586590 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.586594 | orchestrator |     }
2026-04-11 00:02:30.586598 | orchestrator |
2026-04-11 00:02:30.586602 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-11 00:02:30.586606 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-11 00:02:30.586609 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.586613 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.586617 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.586624 | orchestrator |       + image_id = (known after apply)
2026-04-11 00:02:30.586628 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.586632 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-04-11 00:02:30.586636 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.586639 | orchestrator |       + size = 80
2026-04-11 00:02:30.586643 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.586647 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.586651 | orchestrator |     }
2026-04-11 00:02:30.586654 | orchestrator |
2026-04-11 00:02:30.586658 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-11 00:02:30.586662 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-11 00:02:30.586666 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.586670 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.586673 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.586677 | orchestrator |       + image_id = (known after apply)
2026-04-11 00:02:30.586681 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.586692 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-04-11 00:02:30.586696 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.586699 | orchestrator |       + size = 80
2026-04-11 00:02:30.586703 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.586707 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.586711 | orchestrator |     }
2026-04-11 00:02:30.586714 | orchestrator |
2026-04-11 00:02:30.586718 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-11 00:02:30.586723 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 00:02:30.586727 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.586731 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.586735 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.586738 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.586743 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-04-11 00:02:30.586746 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.586750 | orchestrator |       + size = 20
2026-04-11 00:02:30.586754 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.586758 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.586761 | orchestrator |     }
2026-04-11 00:02:30.586765 | orchestrator |
2026-04-11 00:02:30.586769 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-11 00:02:30.586773 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 00:02:30.586776 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.586780 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.586784 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.586788 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.586792 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-04-11 00:02:30.586795 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.586799 | orchestrator |       + size = 20
2026-04-11 00:02:30.586803 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.586807 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.586811 | orchestrator |     }
2026-04-11 00:02:30.586815 | orchestrator |
2026-04-11 00:02:30.586818 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-11 00:02:30.586852 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 00:02:30.586857 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.586861 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.586864 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.586868 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.586872 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-04-11 00:02:30.586876 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.586885 | orchestrator |       + size = 20
2026-04-11 00:02:30.586889 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.586893 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.586897 | orchestrator |     }
2026-04-11 00:02:30.586900 | orchestrator |
2026-04-11 00:02:30.586904 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-11 00:02:30.586908 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 00:02:30.586912 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.586916 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.586919 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.586923 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.586927 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-04-11 00:02:30.586931 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.586934 | orchestrator |       + size = 20
2026-04-11 00:02:30.586938 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.586942 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.586946 | orchestrator |     }
2026-04-11 00:02:30.586950 | orchestrator |
2026-04-11 00:02:30.586953 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-11 00:02:30.586957 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 00:02:30.586961 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.586965 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.586969 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.586972 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.586976 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-04-11 00:02:30.586980 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.586986 | orchestrator |       + size = 20
2026-04-11 00:02:30.586990 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.586994 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.586998 | orchestrator |     }
2026-04-11 00:02:30.587002 | orchestrator |
2026-04-11 00:02:30.587006 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-11 00:02:30.587009 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 00:02:30.587013 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.587017 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.587021 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.587024 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.587028 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-04-11 00:02:30.587032 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.587036 | orchestrator |       + size = 20
2026-04-11 00:02:30.587040 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.587043 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.587047 | orchestrator |     }
2026-04-11 00:02:30.587051 | orchestrator |
2026-04-11 00:02:30.587055 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-11 00:02:30.587059 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 00:02:30.587062 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.587066 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.587070 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.587074 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.587078 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-04-11 00:02:30.587081 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.587085 | orchestrator |       + size = 20
2026-04-11 00:02:30.587089 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.587093 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.587097 | orchestrator |     }
2026-04-11 00:02:30.587100 | orchestrator |
2026-04-11 00:02:30.587107 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-11 00:02:30.587111 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 00:02:30.587118 | orchestrator |       + attachment = (known after apply)
2026-04-11 00:02:30.587122 | orchestrator |       + availability_zone = "nova"
2026-04-11 00:02:30.587126 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.587130 | orchestrator |       + metadata = (known after apply)
2026-04-11 00:02:30.587134 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-04-11 00:02:30.587138 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.587141 | orchestrator |       + size = 20
2026-04-11 00:02:30.587145 | orchestrator |       + volume_retype_policy = "never"
2026-04-11 00:02:30.587149 | orchestrator |       + volume_type = "ssd"
2026-04-11 00:02:30.587153 | orchestrator |     }
2026-04-11 00:02:30.587157 | orchestrator |
2026-04-11 00:02:30.587160 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-11 00:02:30.587164 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-11 00:02:30.587168 | orchestrator | + attachment = (known after apply) 2026-04-11 00:02:30.587172 | orchestrator | + availability_zone = "nova" 2026-04-11 00:02:30.587176 | orchestrator | + id = (known after apply) 2026-04-11 00:02:30.587179 | orchestrator | + metadata = (known after apply) 2026-04-11 00:02:30.587183 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-11 00:02:30.587187 | orchestrator | + region = (known after apply) 2026-04-11 00:02:30.587191 | orchestrator | + size = 20 2026-04-11 00:02:30.587194 | orchestrator | + volume_retype_policy = "never" 2026-04-11 00:02:30.587198 | orchestrator | + volume_type = "ssd" 2026-04-11 00:02:30.587202 | orchestrator | } 2026-04-11 00:02:30.587206 | orchestrator | 2026-04-11 00:02:30.587210 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-11 00:02:30.587214 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-11 00:02:30.587217 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 00:02:30.587221 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 00:02:30.587225 | orchestrator | + all_metadata = (known after apply) 2026-04-11 00:02:30.587229 | orchestrator | + all_tags = (known after apply) 2026-04-11 00:02:30.587233 | orchestrator | + availability_zone = "nova" 2026-04-11 00:02:30.587236 | orchestrator | + config_drive = true 2026-04-11 00:02:30.587240 | orchestrator | + created = (known after apply) 2026-04-11 00:02:30.587244 | orchestrator | + flavor_id = (known after apply) 2026-04-11 00:02:30.587248 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-11 00:02:30.587251 | orchestrator | + force_delete = false 2026-04-11 00:02:30.587255 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 00:02:30.587259 | 
orchestrator | + id = (known after apply) 2026-04-11 00:02:30.587263 | orchestrator | + image_id = (known after apply) 2026-04-11 00:02:30.587266 | orchestrator | + image_name = (known after apply) 2026-04-11 00:02:30.587270 | orchestrator | + key_pair = "testbed" 2026-04-11 00:02:30.587274 | orchestrator | + name = "testbed-manager" 2026-04-11 00:02:30.587278 | orchestrator | + power_state = "active" 2026-04-11 00:02:30.587282 | orchestrator | + region = (known after apply) 2026-04-11 00:02:30.587285 | orchestrator | + security_groups = (known after apply) 2026-04-11 00:02:30.587289 | orchestrator | + stop_before_destroy = false 2026-04-11 00:02:30.587293 | orchestrator | + updated = (known after apply) 2026-04-11 00:02:30.587297 | orchestrator | + user_data = (sensitive value) 2026-04-11 00:02:30.587300 | orchestrator | 2026-04-11 00:02:30.587304 | orchestrator | + block_device { 2026-04-11 00:02:30.587308 | orchestrator | + boot_index = 0 2026-04-11 00:02:30.587312 | orchestrator | + delete_on_termination = false 2026-04-11 00:02:30.587318 | orchestrator | + destination_type = "volume" 2026-04-11 00:02:30.587322 | orchestrator | + multiattach = false 2026-04-11 00:02:30.587326 | orchestrator | + source_type = "volume" 2026-04-11 00:02:30.587330 | orchestrator | + uuid = (known after apply) 2026-04-11 00:02:30.587337 | orchestrator | } 2026-04-11 00:02:30.587340 | orchestrator | 2026-04-11 00:02:30.587344 | orchestrator | + network { 2026-04-11 00:02:30.587348 | orchestrator | + access_network = false 2026-04-11 00:02:30.587352 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 00:02:30.587356 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 00:02:30.587360 | orchestrator | + mac = (known after apply) 2026-04-11 00:02:30.587363 | orchestrator | + name = (known after apply) 2026-04-11 00:02:30.587367 | orchestrator | + port = (known after apply) 2026-04-11 00:02:30.587371 | orchestrator | + uuid = (known after apply) 2026-04-11 
00:02:30.587375 | orchestrator | } 2026-04-11 00:02:30.587378 | orchestrator | } 2026-04-11 00:02:30.587382 | orchestrator | 2026-04-11 00:02:30.587386 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-11 00:02:30.587390 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-11 00:02:30.587394 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 00:02:30.587397 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 00:02:30.587401 | orchestrator | + all_metadata = (known after apply) 2026-04-11 00:02:30.587405 | orchestrator | + all_tags = (known after apply) 2026-04-11 00:02:30.587409 | orchestrator | + availability_zone = "nova" 2026-04-11 00:02:30.587412 | orchestrator | + config_drive = true 2026-04-11 00:02:30.587416 | orchestrator | + created = (known after apply) 2026-04-11 00:02:30.587420 | orchestrator | + flavor_id = (known after apply) 2026-04-11 00:02:30.587424 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-11 00:02:30.587427 | orchestrator | + force_delete = false 2026-04-11 00:02:30.587431 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 00:02:30.587435 | orchestrator | + id = (known after apply) 2026-04-11 00:02:30.587439 | orchestrator | + image_id = (known after apply) 2026-04-11 00:02:30.587442 | orchestrator | + image_name = (known after apply) 2026-04-11 00:02:30.587446 | orchestrator | + key_pair = "testbed" 2026-04-11 00:02:30.587450 | orchestrator | + name = "testbed-node-0" 2026-04-11 00:02:30.587454 | orchestrator | + power_state = "active" 2026-04-11 00:02:30.587457 | orchestrator | + region = (known after apply) 2026-04-11 00:02:30.587461 | orchestrator | + security_groups = (known after apply) 2026-04-11 00:02:30.587465 | orchestrator | + stop_before_destroy = false 2026-04-11 00:02:30.587469 | orchestrator | + updated = (known after apply) 2026-04-11 00:02:30.587472 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-11 00:02:30.587476 | orchestrator | 2026-04-11 00:02:30.587480 | orchestrator | + block_device { 2026-04-11 00:02:30.587484 | orchestrator | + boot_index = 0 2026-04-11 00:02:30.587490 | orchestrator | + delete_on_termination = false 2026-04-11 00:02:30.587494 | orchestrator | + destination_type = "volume" 2026-04-11 00:02:30.587498 | orchestrator | + multiattach = false 2026-04-11 00:02:30.587501 | orchestrator | + source_type = "volume" 2026-04-11 00:02:30.587505 | orchestrator | + uuid = (known after apply) 2026-04-11 00:02:30.587509 | orchestrator | } 2026-04-11 00:02:30.587513 | orchestrator | 2026-04-11 00:02:30.587517 | orchestrator | + network { 2026-04-11 00:02:30.587520 | orchestrator | + access_network = false 2026-04-11 00:02:30.587524 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 00:02:30.587528 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 00:02:30.587532 | orchestrator | + mac = (known after apply) 2026-04-11 00:02:30.587536 | orchestrator | + name = (known after apply) 2026-04-11 00:02:30.587539 | orchestrator | + port = (known after apply) 2026-04-11 00:02:30.587543 | orchestrator | + uuid = (known after apply) 2026-04-11 00:02:30.587547 | orchestrator | } 2026-04-11 00:02:30.587551 | orchestrator | } 2026-04-11 00:02:30.587599 | orchestrator | 2026-04-11 00:02:30.587605 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-11 00:02:30.587686 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-11 00:02:30.587760 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 00:02:30.587770 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 00:02:30.587840 | orchestrator | + all_metadata = (known after apply) 2026-04-11 00:02:30.587844 | orchestrator | + all_tags = (known after apply) 2026-04-11 00:02:30.587848 | orchestrator | + availability_zone = "nova" 2026-04-11 00:02:30.587852 
| orchestrator | + config_drive = true 2026-04-11 00:02:30.587856 | orchestrator | + created = (known after apply) 2026-04-11 00:02:30.587859 | orchestrator | + flavor_id = (known after apply) 2026-04-11 00:02:30.587863 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-11 00:02:30.587867 | orchestrator | + force_delete = false 2026-04-11 00:02:30.587871 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 00:02:30.587874 | orchestrator | + id = (known after apply) 2026-04-11 00:02:30.587878 | orchestrator | + image_id = (known after apply) 2026-04-11 00:02:30.587882 | orchestrator | + image_name = (known after apply) 2026-04-11 00:02:30.587886 | orchestrator | + key_pair = "testbed" 2026-04-11 00:02:30.587889 | orchestrator | + name = "testbed-node-1" 2026-04-11 00:02:30.587893 | orchestrator | + power_state = "active" 2026-04-11 00:02:30.587897 | orchestrator | + region = (known after apply) 2026-04-11 00:02:30.587901 | orchestrator | + security_groups = (known after apply) 2026-04-11 00:02:30.587904 | orchestrator | + stop_before_destroy = false 2026-04-11 00:02:30.587908 | orchestrator | + updated = (known after apply) 2026-04-11 00:02:30.587912 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-11 00:02:30.587916 | orchestrator | 2026-04-11 00:02:30.587919 | orchestrator | + block_device { 2026-04-11 00:02:30.587923 | orchestrator | + boot_index = 0 2026-04-11 00:02:30.587927 | orchestrator | + delete_on_termination = false 2026-04-11 00:02:30.587931 | orchestrator | + destination_type = "volume" 2026-04-11 00:02:30.587935 | orchestrator | + multiattach = false 2026-04-11 00:02:30.587938 | orchestrator | + source_type = "volume" 2026-04-11 00:02:30.587942 | orchestrator | + uuid = (known after apply) 2026-04-11 00:02:30.587946 | orchestrator | } 2026-04-11 00:02:30.587950 | orchestrator | 2026-04-11 00:02:30.587954 | orchestrator | + network { 2026-04-11 00:02:30.587957 | orchestrator | + access_network = 
false 2026-04-11 00:02:30.587961 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 00:02:30.587965 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 00:02:30.587969 | orchestrator | + mac = (known after apply) 2026-04-11 00:02:30.587973 | orchestrator | + name = (known after apply) 2026-04-11 00:02:30.587976 | orchestrator | + port = (known after apply) 2026-04-11 00:02:30.587980 | orchestrator | + uuid = (known after apply) 2026-04-11 00:02:30.587984 | orchestrator | } 2026-04-11 00:02:30.587988 | orchestrator | } 2026-04-11 00:02:30.589331 | orchestrator | 2026-04-11 00:02:30.589353 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-11 00:02:30.589358 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-11 00:02:30.589362 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 00:02:30.589366 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 00:02:30.589371 | orchestrator | + all_metadata = (known after apply) 2026-04-11 00:02:30.589375 | orchestrator | + all_tags = (known after apply) 2026-04-11 00:02:30.589385 | orchestrator | + availability_zone = "nova" 2026-04-11 00:02:30.589390 | orchestrator | + config_drive = true 2026-04-11 00:02:30.589394 | orchestrator | + created = (known after apply) 2026-04-11 00:02:30.589397 | orchestrator | + flavor_id = (known after apply) 2026-04-11 00:02:30.589401 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-11 00:02:30.589405 | orchestrator | + force_delete = false 2026-04-11 00:02:30.589409 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 00:02:30.589413 | orchestrator | + id = (known after apply) 2026-04-11 00:02:30.589417 | orchestrator | + image_id = (known after apply) 2026-04-11 00:02:30.589427 | orchestrator | + image_name = (known after apply) 2026-04-11 00:02:30.589430 | orchestrator | + key_pair = "testbed" 2026-04-11 00:02:30.589434 | orchestrator | + name = 
"testbed-node-2" 2026-04-11 00:02:30.589438 | orchestrator | + power_state = "active" 2026-04-11 00:02:30.589442 | orchestrator | + region = (known after apply) 2026-04-11 00:02:30.589445 | orchestrator | + security_groups = (known after apply) 2026-04-11 00:02:30.589449 | orchestrator | + stop_before_destroy = false 2026-04-11 00:02:30.589453 | orchestrator | + updated = (known after apply) 2026-04-11 00:02:30.589457 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-11 00:02:30.589461 | orchestrator | 2026-04-11 00:02:30.589464 | orchestrator | + block_device { 2026-04-11 00:02:30.589468 | orchestrator | + boot_index = 0 2026-04-11 00:02:30.589472 | orchestrator | + delete_on_termination = false 2026-04-11 00:02:30.589476 | orchestrator | + destination_type = "volume" 2026-04-11 00:02:30.589479 | orchestrator | + multiattach = false 2026-04-11 00:02:30.589483 | orchestrator | + source_type = "volume" 2026-04-11 00:02:30.589487 | orchestrator | + uuid = (known after apply) 2026-04-11 00:02:30.589491 | orchestrator | } 2026-04-11 00:02:30.589495 | orchestrator | 2026-04-11 00:02:30.589498 | orchestrator | + network { 2026-04-11 00:02:30.589502 | orchestrator | + access_network = false 2026-04-11 00:02:30.589506 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 00:02:30.589510 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 00:02:30.589513 | orchestrator | + mac = (known after apply) 2026-04-11 00:02:30.589517 | orchestrator | + name = (known after apply) 2026-04-11 00:02:30.589521 | orchestrator | + port = (known after apply) 2026-04-11 00:02:30.589524 | orchestrator | + uuid = (known after apply) 2026-04-11 00:02:30.589528 | orchestrator | } 2026-04-11 00:02:30.589532 | orchestrator | } 2026-04-11 00:02:30.589536 | orchestrator | 2026-04-11 00:02:30.589540 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-11 00:02:30.589543 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-11 00:02:30.589547 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 00:02:30.589551 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 00:02:30.589555 | orchestrator | + all_metadata = (known after apply) 2026-04-11 00:02:30.589558 | orchestrator | + all_tags = (known after apply) 2026-04-11 00:02:30.589562 | orchestrator | + availability_zone = "nova" 2026-04-11 00:02:30.589566 | orchestrator | + config_drive = true 2026-04-11 00:02:30.589569 | orchestrator | + created = (known after apply) 2026-04-11 00:02:30.589573 | orchestrator | + flavor_id = (known after apply) 2026-04-11 00:02:30.589577 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-11 00:02:30.589581 | orchestrator | + force_delete = false 2026-04-11 00:02:30.589584 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 00:02:30.589588 | orchestrator | + id = (known after apply) 2026-04-11 00:02:30.589592 | orchestrator | + image_id = (known after apply) 2026-04-11 00:02:30.589596 | orchestrator | + image_name = (known after apply) 2026-04-11 00:02:30.589599 | orchestrator | + key_pair = "testbed" 2026-04-11 00:02:30.589603 | orchestrator | + name = "testbed-node-3" 2026-04-11 00:02:30.589607 | orchestrator | + power_state = "active" 2026-04-11 00:02:30.589610 | orchestrator | + region = (known after apply) 2026-04-11 00:02:30.589614 | orchestrator | + security_groups = (known after apply) 2026-04-11 00:02:30.589618 | orchestrator | + stop_before_destroy = false 2026-04-11 00:02:30.589622 | orchestrator | + updated = (known after apply) 2026-04-11 00:02:30.589625 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-11 00:02:30.589629 | orchestrator | 2026-04-11 00:02:30.589633 | orchestrator | + block_device { 2026-04-11 00:02:30.589639 | orchestrator | + boot_index = 0 2026-04-11 00:02:30.589643 | orchestrator | + delete_on_termination = false 2026-04-11 
00:02:30.589647 | orchestrator | + destination_type = "volume" 2026-04-11 00:02:30.589653 | orchestrator | + multiattach = false 2026-04-11 00:02:30.589657 | orchestrator | + source_type = "volume" 2026-04-11 00:02:30.589661 | orchestrator | + uuid = (known after apply) 2026-04-11 00:02:30.589665 | orchestrator | } 2026-04-11 00:02:30.589669 | orchestrator | 2026-04-11 00:02:30.589672 | orchestrator | + network { 2026-04-11 00:02:30.589676 | orchestrator | + access_network = false 2026-04-11 00:02:30.589680 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 00:02:30.589684 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 00:02:30.589688 | orchestrator | + mac = (known after apply) 2026-04-11 00:02:30.589691 | orchestrator | + name = (known after apply) 2026-04-11 00:02:30.589695 | orchestrator | + port = (known after apply) 2026-04-11 00:02:30.589699 | orchestrator | + uuid = (known after apply) 2026-04-11 00:02:30.589703 | orchestrator | } 2026-04-11 00:02:30.589707 | orchestrator | } 2026-04-11 00:02:30.589714 | orchestrator | 2026-04-11 00:02:30.589718 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-11 00:02:30.589722 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-11 00:02:30.589726 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 00:02:30.589730 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 00:02:30.589734 | orchestrator | + all_metadata = (known after apply) 2026-04-11 00:02:30.589737 | orchestrator | + all_tags = (known after apply) 2026-04-11 00:02:30.589741 | orchestrator | + availability_zone = "nova" 2026-04-11 00:02:30.589745 | orchestrator | + config_drive = true 2026-04-11 00:02:30.589749 | orchestrator | + created = (known after apply) 2026-04-11 00:02:30.589752 | orchestrator | + flavor_id = (known after apply) 2026-04-11 00:02:30.589756 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-11 00:02:30.589760 | 
orchestrator | + force_delete = false 2026-04-11 00:02:30.589764 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 00:02:30.589767 | orchestrator | + id = (known after apply) 2026-04-11 00:02:30.589771 | orchestrator | + image_id = (known after apply) 2026-04-11 00:02:30.589775 | orchestrator | + image_name = (known after apply) 2026-04-11 00:02:30.589779 | orchestrator | + key_pair = "testbed" 2026-04-11 00:02:30.589782 | orchestrator | + name = "testbed-node-4" 2026-04-11 00:02:30.589786 | orchestrator | + power_state = "active" 2026-04-11 00:02:30.589790 | orchestrator | + region = (known after apply) 2026-04-11 00:02:30.589794 | orchestrator | + security_groups = (known after apply) 2026-04-11 00:02:30.589797 | orchestrator | + stop_before_destroy = false 2026-04-11 00:02:30.589801 | orchestrator | + updated = (known after apply) 2026-04-11 00:02:30.589805 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-11 00:02:30.589809 | orchestrator | 2026-04-11 00:02:30.589812 | orchestrator | + block_device { 2026-04-11 00:02:30.589816 | orchestrator | + boot_index = 0 2026-04-11 00:02:30.589820 | orchestrator | + delete_on_termination = false 2026-04-11 00:02:30.589840 | orchestrator | + destination_type = "volume" 2026-04-11 00:02:30.589844 | orchestrator | + multiattach = false 2026-04-11 00:02:30.589848 | orchestrator | + source_type = "volume" 2026-04-11 00:02:30.589852 | orchestrator | + uuid = (known after apply) 2026-04-11 00:02:30.589856 | orchestrator | } 2026-04-11 00:02:30.589859 | orchestrator | 2026-04-11 00:02:30.589863 | orchestrator | + network { 2026-04-11 00:02:30.589867 | orchestrator | + access_network = false 2026-04-11 00:02:30.589871 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 00:02:30.589874 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 00:02:30.589878 | orchestrator | + mac = (known after apply) 2026-04-11 00:02:30.589882 | orchestrator | + name = (known 
after apply) 2026-04-11 00:02:30.589886 | orchestrator | + port = (known after apply) 2026-04-11 00:02:30.589890 | orchestrator | + uuid = (known after apply) 2026-04-11 00:02:30.589893 | orchestrator | } 2026-04-11 00:02:30.589897 | orchestrator | } 2026-04-11 00:02:30.589904 | orchestrator | 2026-04-11 00:02:30.589908 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-11 00:02:30.589912 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-11 00:02:30.589916 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 00:02:30.589920 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 00:02:30.589924 | orchestrator | + all_metadata = (known after apply) 2026-04-11 00:02:30.589927 | orchestrator | + all_tags = (known after apply) 2026-04-11 00:02:30.589931 | orchestrator | + availability_zone = "nova" 2026-04-11 00:02:30.589935 | orchestrator | + config_drive = true 2026-04-11 00:02:30.589939 | orchestrator | + created = (known after apply) 2026-04-11 00:02:30.589942 | orchestrator | + flavor_id = (known after apply) 2026-04-11 00:02:30.589946 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-11 00:02:30.589950 | orchestrator | + force_delete = false 2026-04-11 00:02:30.589956 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 00:02:30.589960 | orchestrator | + id = (known after apply) 2026-04-11 00:02:30.589964 | orchestrator | + image_id = (known after apply) 2026-04-11 00:02:30.589968 | orchestrator | + image_name = (known after apply) 2026-04-11 00:02:30.589971 | orchestrator | + key_pair = "testbed" 2026-04-11 00:02:30.589975 | orchestrator | + name = "testbed-node-5" 2026-04-11 00:02:30.589979 | orchestrator | + power_state = "active" 2026-04-11 00:02:30.589983 | orchestrator | + region = (known after apply) 2026-04-11 00:02:30.589986 | orchestrator | + security_groups = (known after apply) 2026-04-11 00:02:30.589990 | orchestrator | + 
stop_before_destroy = false 2026-04-11 00:02:30.589994 | orchestrator | + updated = (known after apply) 2026-04-11 00:02:30.589998 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-11 00:02:30.590001 | orchestrator | 2026-04-11 00:02:30.590005 | orchestrator | + block_device { 2026-04-11 00:02:30.590009 | orchestrator | + boot_index = 0 2026-04-11 00:02:30.590032 | orchestrator | + delete_on_termination = false 2026-04-11 00:02:30.590037 | orchestrator | + destination_type = "volume" 2026-04-11 00:02:30.590040 | orchestrator | + multiattach = false 2026-04-11 00:02:30.590044 | orchestrator | + source_type = "volume" 2026-04-11 00:02:30.590048 | orchestrator | + uuid = (known after apply) 2026-04-11 00:02:30.590052 | orchestrator | } 2026-04-11 00:02:30.590056 | orchestrator | 2026-04-11 00:02:30.590060 | orchestrator | + network { 2026-04-11 00:02:30.590063 | orchestrator | + access_network = false 2026-04-11 00:02:30.590067 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 00:02:30.590071 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 00:02:30.590075 | orchestrator | + mac = (known after apply) 2026-04-11 00:02:30.590078 | orchestrator | + name = (known after apply) 2026-04-11 00:02:30.590082 | orchestrator | + port = (known after apply) 2026-04-11 00:02:30.590086 | orchestrator | + uuid = (known after apply) 2026-04-11 00:02:30.590090 | orchestrator | } 2026-04-11 00:02:30.590094 | orchestrator | } 2026-04-11 00:02:30.590100 | orchestrator | 2026-04-11 00:02:30.590104 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-11 00:02:30.590108 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-11 00:02:30.590112 | orchestrator | + fingerprint = (known after apply) 2026-04-11 00:02:30.590115 | orchestrator | + id = (known after apply) 2026-04-11 00:02:30.590119 | orchestrator | + name = "testbed" 2026-04-11 00:02:30.590123 | orchestrator | + private_key = 
(sensitive value) 2026-04-11 00:02:30.590127 | orchestrator | + public_key = (known after apply) 2026-04-11 00:02:30.590130 | orchestrator | + region = (known after apply) 2026-04-11 00:02:30.590134 | orchestrator | + user_id = (known after apply) 2026-04-11 00:02:30.590138 | orchestrator | } 2026-04-11 00:02:30.590142 | orchestrator | 2026-04-11 00:02:30.590146 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-11 00:02:30.590149 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-11 00:02:30.590159 | orchestrator | + device = (known after apply) 2026-04-11 00:02:30.590163 | orchestrator | + id = (known after apply) 2026-04-11 00:02:30.590167 | orchestrator | + instance_id = (known after apply) 2026-04-11 00:02:30.590171 | orchestrator | + region = (known after apply) 2026-04-11 00:02:30.590174 | orchestrator | + volume_id = (known after apply) 2026-04-11 00:02:30.590178 | orchestrator | } 2026-04-11 00:02:30.590182 | orchestrator | 2026-04-11 00:02:30.590186 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-11 00:02:30.590190 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-11 00:02:30.590193 | orchestrator | + device = (known after apply) 2026-04-11 00:02:30.590197 | orchestrator | + id = (known after apply) 2026-04-11 00:02:30.590201 | orchestrator | + instance_id = (known after apply) 2026-04-11 00:02:30.590205 | orchestrator | + region = (known after apply) 2026-04-11 00:02:30.590208 | orchestrator | + volume_id = (known after apply) 2026-04-11 00:02:30.590212 | orchestrator | } 2026-04-11 00:02:30.590216 | orchestrator | 2026-04-11 00:02:30.590220 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-11 00:02:30.590224 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-04-11 00:02:30.590227 | orchestrator |       + device = (known after apply)
2026-04-11 00:02:30.590231 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.590235 | orchestrator |       + instance_id = (known after apply)
2026-04-11 00:02:30.590239 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.590242 | orchestrator |       + volume_id = (known after apply)
2026-04-11 00:02:30.590246 | orchestrator |     }
2026-04-11 00:02:30.590250 | orchestrator |
2026-04-11 00:02:30.590254 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-04-11 00:02:30.590257 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-11 00:02:30.590261 | orchestrator |       + device = (known after apply)
2026-04-11 00:02:30.590266 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.590269 | orchestrator |       + instance_id = (known after apply)
2026-04-11 00:02:30.590273 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.590277 | orchestrator |       + volume_id = (known after apply)
2026-04-11 00:02:30.590280 | orchestrator |     }
2026-04-11 00:02:30.590284 | orchestrator |
2026-04-11 00:02:30.590288 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-04-11 00:02:30.590292 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-11 00:02:30.590296 | orchestrator |       + device = (known after apply)
2026-04-11 00:02:30.590300 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.590304 | orchestrator |       + instance_id = (known after apply)
2026-04-11 00:02:30.590310 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.590314 | orchestrator |       + volume_id = (known after apply)
2026-04-11 00:02:30.590318 | orchestrator |     }
2026-04-11 00:02:30.590322 | orchestrator |
2026-04-11 00:02:30.590326 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-04-11 00:02:30.590329 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-11 00:02:30.590333 | orchestrator |       + device = (known after apply)
2026-04-11 00:02:30.590337 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.590341 | orchestrator |       + instance_id = (known after apply)
2026-04-11 00:02:30.590344 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.590348 | orchestrator |       + volume_id = (known after apply)
2026-04-11 00:02:30.590352 | orchestrator |     }
2026-04-11 00:02:30.590356 | orchestrator |
2026-04-11 00:02:30.590362 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-04-11 00:02:30.590366 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-11 00:02:30.590369 | orchestrator |       + device = (known after apply)
2026-04-11 00:02:30.590373 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.590377 | orchestrator |       + instance_id = (known after apply)
2026-04-11 00:02:30.590381 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.590388 | orchestrator |       + volume_id = (known after apply)
2026-04-11 00:02:30.590391 | orchestrator |     }
2026-04-11 00:02:30.590395 | orchestrator |
2026-04-11 00:02:30.590399 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-04-11 00:02:30.590403 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-11 00:02:30.590407 | orchestrator |       + device = (known after apply)
2026-04-11 00:02:30.590410 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.590414 | orchestrator |       + instance_id = (known after apply)
2026-04-11 00:02:30.590418 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.590422 | orchestrator |       + volume_id = (known after apply)
2026-04-11 00:02:30.590426 | orchestrator |     }
2026-04-11 00:02:30.590429 | orchestrator |
2026-04-11 00:02:30.590433 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-04-11 00:02:30.590437 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-11 00:02:30.590441 | orchestrator |       + device = (known after apply)
2026-04-11 00:02:30.590444 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.590448 | orchestrator |       + instance_id = (known after apply)
2026-04-11 00:02:30.590452 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.590456 | orchestrator |       + volume_id = (known after apply)
2026-04-11 00:02:30.590459 | orchestrator |     }
2026-04-11 00:02:30.590463 | orchestrator |
2026-04-11 00:02:30.590467 | orchestrator |   # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-04-11 00:02:30.590472 | orchestrator |   + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-04-11 00:02:30.590475 | orchestrator |       + fixed_ip = (known after apply)
2026-04-11 00:02:30.590479 | orchestrator |       + floating_ip = (known after apply)
2026-04-11 00:02:30.590483 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.590487 | orchestrator |       + port_id = (known after apply)
2026-04-11 00:02:30.590490 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.590494 | orchestrator |     }
2026-04-11 00:02:30.590498 | orchestrator |
2026-04-11 00:02:30.590502 | orchestrator |   # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-04-11 00:02:30.590506 | orchestrator |   + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-04-11 00:02:30.590509 | orchestrator |       + address = (known after apply)
2026-04-11 00:02:30.590513 | orchestrator |       + all_tags = (known after apply)
2026-04-11 00:02:30.590517 | orchestrator |       + dns_domain = (known after apply)
2026-04-11 00:02:30.590521 | orchestrator |       + dns_name = (known after apply)
2026-04-11 00:02:30.590525 | orchestrator |       + fixed_ip = (known after apply)
2026-04-11 00:02:30.590528 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.590532 | orchestrator |       + pool = "public"
2026-04-11 00:02:30.590536 | orchestrator |       + port_id = (known after apply)
2026-04-11 00:02:30.590540 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.590544 | orchestrator |       + subnet_id = (known after apply)
2026-04-11 00:02:30.590547 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.590551 | orchestrator |     }
2026-04-11 00:02:30.590557 | orchestrator |
2026-04-11 00:02:30.590561 | orchestrator |   # openstack_networking_network_v2.net_management will be created
2026-04-11 00:02:30.590564 | orchestrator |   + resource "openstack_networking_network_v2" "net_management" {
2026-04-11 00:02:30.590568 | orchestrator |       + admin_state_up = (known after apply)
2026-04-11 00:02:30.590572 | orchestrator |       + all_tags = (known after apply)
2026-04-11 00:02:30.590576 | orchestrator |       + availability_zone_hints = [
2026-04-11 00:02:30.590580 | orchestrator |           + "nova",
2026-04-11 00:02:30.590583 | orchestrator |         ]
2026-04-11 00:02:30.590587 | orchestrator |       + dns_domain = (known after apply)
2026-04-11 00:02:30.590591 | orchestrator |       + external = (known after apply)
2026-04-11 00:02:30.590595 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.590599 | orchestrator |       + mtu = (known after apply)
2026-04-11 00:02:30.590602 | orchestrator |       + name = "net-testbed-management"
2026-04-11 00:02:30.590606 | orchestrator |       + port_security_enabled = (known after apply)
2026-04-11 00:02:30.590613 | orchestrator |       + qos_policy_id = (known after apply)
2026-04-11 00:02:30.590617 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.590621 | orchestrator |       + shared = (known after apply)
2026-04-11 00:02:30.590624 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.590628 | orchestrator |       + transparent_vlan = (known after apply)
2026-04-11 00:02:30.590632 | orchestrator |
2026-04-11 00:02:30.590636 | orchestrator |       + segments (known after apply)
2026-04-11 00:02:30.590640 | orchestrator |     }
2026-04-11 00:02:30.590691 | orchestrator |
2026-04-11 00:02:30.590696 | orchestrator |   # openstack_networking_port_v2.manager_port_management will be created
2026-04-11 00:02:30.590700 | orchestrator |   + resource "openstack_networking_port_v2" "manager_port_management" {
2026-04-11 00:02:30.590765 | orchestrator |       + admin_state_up = (known after apply)
2026-04-11 00:02:30.590866 | orchestrator |       + all_fixed_ips = (known after apply)
2026-04-11 00:02:30.590872 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-11 00:02:30.590924 | orchestrator |       + all_tags = (known after apply)
2026-04-11 00:02:30.590938 | orchestrator |       + device_id = (known after apply)
2026-04-11 00:02:30.590942 | orchestrator |       + device_owner = (known after apply)
2026-04-11 00:02:30.590946 | orchestrator |       + dns_assignment = (known after apply)
2026-04-11 00:02:30.590950 | orchestrator |       + dns_name = (known after apply)
2026-04-11 00:02:30.591060 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.591097 | orchestrator |       + mac_address = (known after apply)
2026-04-11 00:02:30.591101 | orchestrator |       + network_id = (known after apply)
2026-04-11 00:02:30.591105 | orchestrator |       + port_security_enabled = (known after apply)
2026-04-11 00:02:30.591109 | orchestrator |       + qos_policy_id = (known after apply)
2026-04-11 00:02:30.591120 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.591167 | orchestrator |       + security_group_ids = (known after apply)
2026-04-11 00:02:30.591225 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.591230 | orchestrator |
2026-04-11 00:02:30.591317 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.591321 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-11 00:02:30.591378 | orchestrator |         }
2026-04-11 00:02:30.591408 | orchestrator |
2026-04-11 00:02:30.591471 | orchestrator |       + binding (known after apply)
2026-04-11 00:02:30.591550 | orchestrator |
2026-04-11 00:02:30.591569 | orchestrator |       + fixed_ip {
2026-04-11 00:02:30.591588 | orchestrator |           + ip_address = "192.168.16.5"
2026-04-11 00:02:30.591592 | orchestrator |           + subnet_id = (known after apply)
2026-04-11 00:02:30.591603 | orchestrator |         }
2026-04-11 00:02:30.591633 | orchestrator |     }
2026-04-11 00:02:30.591681 | orchestrator |
2026-04-11 00:02:30.591687 | orchestrator |   # openstack_networking_port_v2.node_port_management[0] will be created
2026-04-11 00:02:30.591690 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-11 00:02:30.591694 | orchestrator |       + admin_state_up = (known after apply)
2026-04-11 00:02:30.591750 | orchestrator |       + all_fixed_ips = (known after apply)
2026-04-11 00:02:30.591780 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-11 00:02:30.591784 | orchestrator |       + all_tags = (known after apply)
2026-04-11 00:02:30.591788 | orchestrator |       + device_id = (known after apply)
2026-04-11 00:02:30.591792 | orchestrator |       + device_owner = (known after apply)
2026-04-11 00:02:30.591796 | orchestrator |       + dns_assignment = (known after apply)
2026-04-11 00:02:30.591886 | orchestrator |       + dns_name = (known after apply)
2026-04-11 00:02:30.591890 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.591894 | orchestrator |       + mac_address = (known after apply)
2026-04-11 00:02:30.591898 | orchestrator |       + network_id = (known after apply)
2026-04-11 00:02:30.591917 | orchestrator |       + port_security_enabled = (known after apply)
2026-04-11 00:02:30.591921 | orchestrator |       + qos_policy_id = (known after apply)
2026-04-11 00:02:30.591925 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.591964 | orchestrator |       + security_group_ids = (known after apply)
2026-04-11 00:02:30.591982 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.591986 | orchestrator |
2026-04-11 00:02:30.592007 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.592012 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-04-11 00:02:30.592016 | orchestrator |         }
2026-04-11 00:02:30.592020 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.592059 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-11 00:02:30.592064 | orchestrator |         }
2026-04-11 00:02:30.592082 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.592087 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-04-11 00:02:30.592090 | orchestrator |         }
2026-04-11 00:02:30.592132 | orchestrator |
2026-04-11 00:02:30.592137 | orchestrator |       + binding (known after apply)
2026-04-11 00:02:30.592180 | orchestrator |
2026-04-11 00:02:30.592225 | orchestrator |       + fixed_ip {
2026-04-11 00:02:30.592292 | orchestrator |           + ip_address = "192.168.16.10"
2026-04-11 00:02:30.592297 | orchestrator |           + subnet_id = (known after apply)
2026-04-11 00:02:30.592301 | orchestrator |         }
2026-04-11 00:02:30.592320 | orchestrator |     }
2026-04-11 00:02:30.592324 | orchestrator |
2026-04-11 00:02:30.592360 | orchestrator |   # openstack_networking_port_v2.node_port_management[1] will be created
2026-04-11 00:02:30.592364 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-11 00:02:30.592395 | orchestrator |       + admin_state_up = (known after apply)
2026-04-11 00:02:30.592400 | orchestrator |       + all_fixed_ips = (known after apply)
2026-04-11 00:02:30.592404 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-11 00:02:30.592407 | orchestrator |       + all_tags = (known after apply)
2026-04-11 00:02:30.592411 | orchestrator |       + device_id = (known after apply)
2026-04-11 00:02:30.592442 | orchestrator |       + device_owner = (known after apply)
2026-04-11 00:02:30.592446 | orchestrator |       + dns_assignment = (known after apply)
2026-04-11 00:02:30.592473 | orchestrator |       + dns_name = (known after apply)
2026-04-11 00:02:30.592478 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.592516 | orchestrator |       + mac_address = (known after apply)
2026-04-11 00:02:30.592521 | orchestrator |       + network_id = (known after apply)
2026-04-11 00:02:30.592577 | orchestrator |       + port_security_enabled = (known after apply)
2026-04-11 00:02:30.592588 | orchestrator |       + qos_policy_id = (known after apply)
2026-04-11 00:02:30.592644 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.592655 | orchestrator |       + security_group_ids = (known after apply)
2026-04-11 00:02:30.592700 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.592705 | orchestrator |
2026-04-11 00:02:30.592734 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.592738 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-04-11 00:02:30.592742 | orchestrator |         }
2026-04-11 00:02:30.592746 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.592802 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-11 00:02:30.592807 | orchestrator |         }
2026-04-11 00:02:30.592916 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.592921 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-04-11 00:02:30.592950 | orchestrator |         }
2026-04-11 00:02:30.593049 | orchestrator |
2026-04-11 00:02:30.593054 | orchestrator |       + binding (known after apply)
2026-04-11 00:02:30.593099 | orchestrator |
2026-04-11 00:02:30.593130 | orchestrator |       + fixed_ip {
2026-04-11 00:02:30.593135 | orchestrator |           + ip_address = "192.168.16.11"
2026-04-11 00:02:30.593174 | orchestrator |           + subnet_id = (known after apply)
2026-04-11 00:02:30.593178 | orchestrator |         }
2026-04-11 00:02:30.593237 | orchestrator |     }
2026-04-11 00:02:30.593271 | orchestrator |
2026-04-11 00:02:30.593275 | orchestrator |   # openstack_networking_port_v2.node_port_management[2] will be created
2026-04-11 00:02:30.593279 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-11 00:02:30.593283 | orchestrator |       + admin_state_up = (known after apply)
2026-04-11 00:02:30.593287 | orchestrator |       + all_fixed_ips = (known after apply)
2026-04-11 00:02:30.593291 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-11 00:02:30.593294 | orchestrator |       + all_tags = (known after apply)
2026-04-11 00:02:30.593379 | orchestrator |       + device_id = (known after apply)
2026-04-11 00:02:30.593384 | orchestrator |       + device_owner = (known after apply)
2026-04-11 00:02:30.593407 | orchestrator |       + dns_assignment = (known after apply)
2026-04-11 00:02:30.593426 | orchestrator |       + dns_name = (known after apply)
2026-04-11 00:02:30.593434 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.593438 | orchestrator |       + mac_address = (known after apply)
2026-04-11 00:02:30.593441 | orchestrator |       + network_id = (known after apply)
2026-04-11 00:02:30.593486 | orchestrator |       + port_security_enabled = (known after apply)
2026-04-11 00:02:30.593490 | orchestrator |       + qos_policy_id = (known after apply)
2026-04-11 00:02:30.593494 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.593537 | orchestrator |       + security_group_ids = (known after apply)
2026-04-11 00:02:30.593541 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.593545 | orchestrator |
2026-04-11 00:02:30.593560 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.593564 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-04-11 00:02:30.593568 | orchestrator |         }
2026-04-11 00:02:30.593572 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.593587 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-11 00:02:30.593640 | orchestrator |         }
2026-04-11 00:02:30.593644 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.593648 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-04-11 00:02:30.593652 | orchestrator |         }
2026-04-11 00:02:30.593656 | orchestrator |
2026-04-11 00:02:30.593663 | orchestrator |       + binding (known after apply)
2026-04-11 00:02:30.593667 | orchestrator |
2026-04-11 00:02:30.593671 | orchestrator |       + fixed_ip {
2026-04-11 00:02:30.593675 | orchestrator |           + ip_address = "192.168.16.12"
2026-04-11 00:02:30.593679 | orchestrator |           + subnet_id = (known after apply)
2026-04-11 00:02:30.593683 | orchestrator |         }
2026-04-11 00:02:30.593686 | orchestrator |     }
2026-04-11 00:02:30.593690 | orchestrator |
2026-04-11 00:02:30.593694 | orchestrator |   # openstack_networking_port_v2.node_port_management[3] will be created
2026-04-11 00:02:30.593698 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-11 00:02:30.593702 | orchestrator |       + admin_state_up = (known after apply)
2026-04-11 00:02:30.593706 | orchestrator |       + all_fixed_ips = (known after apply)
2026-04-11 00:02:30.593709 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-11 00:02:30.593713 | orchestrator |       + all_tags = (known after apply)
2026-04-11 00:02:30.593717 | orchestrator |       + device_id = (known after apply)
2026-04-11 00:02:30.593721 | orchestrator |       + device_owner = (known after apply)
2026-04-11 00:02:30.593724 | orchestrator |       + dns_assignment = (known after apply)
2026-04-11 00:02:30.593728 | orchestrator |       + dns_name = (known after apply)
2026-04-11 00:02:30.593732 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.593736 | orchestrator |       + mac_address = (known after apply)
2026-04-11 00:02:30.593739 | orchestrator |       + network_id = (known after apply)
2026-04-11 00:02:30.593743 | orchestrator |       + port_security_enabled = (known after apply)
2026-04-11 00:02:30.593747 | orchestrator |       + qos_policy_id = (known after apply)
2026-04-11 00:02:30.593751 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.593755 | orchestrator |       + security_group_ids = (known after apply)
2026-04-11 00:02:30.593758 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.593762 | orchestrator |
2026-04-11 00:02:30.593766 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.593770 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-04-11 00:02:30.593773 | orchestrator |         }
2026-04-11 00:02:30.593777 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.593781 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-11 00:02:30.593785 | orchestrator |         }
2026-04-11 00:02:30.593789 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.593792 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-04-11 00:02:30.593796 | orchestrator |         }
2026-04-11 00:02:30.593800 | orchestrator |
2026-04-11 00:02:30.593807 | orchestrator |       + binding (known after apply)
2026-04-11 00:02:30.593811 | orchestrator |
2026-04-11 00:02:30.593815 | orchestrator |       + fixed_ip {
2026-04-11 00:02:30.593819 | orchestrator |           + ip_address = "192.168.16.13"
2026-04-11 00:02:30.593833 | orchestrator |           + subnet_id = (known after apply)
2026-04-11 00:02:30.593837 | orchestrator |         }
2026-04-11 00:02:30.593840 | orchestrator |     }
2026-04-11 00:02:30.593844 | orchestrator |
2026-04-11 00:02:30.593848 | orchestrator |   # openstack_networking_port_v2.node_port_management[4] will be created
2026-04-11 00:02:30.593852 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-11 00:02:30.593855 | orchestrator |       + admin_state_up = (known after apply)
2026-04-11 00:02:30.593859 | orchestrator |       + all_fixed_ips = (known after apply)
2026-04-11 00:02:30.593863 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-11 00:02:30.593867 | orchestrator |       + all_tags = (known after apply)
2026-04-11 00:02:30.593871 | orchestrator |       + device_id = (known after apply)
2026-04-11 00:02:30.593874 | orchestrator |       + device_owner = (known after apply)
2026-04-11 00:02:30.593878 | orchestrator |       + dns_assignment = (known after apply)
2026-04-11 00:02:30.593882 | orchestrator |       + dns_name = (known after apply)
2026-04-11 00:02:30.593886 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.593889 | orchestrator |       + mac_address = (known after apply)
2026-04-11 00:02:30.593893 | orchestrator |       + network_id = (known after apply)
2026-04-11 00:02:30.593897 | orchestrator |       + port_security_enabled = (known after apply)
2026-04-11 00:02:30.593901 | orchestrator |       + qos_policy_id = (known after apply)
2026-04-11 00:02:30.593904 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.593908 | orchestrator |       + security_group_ids = (known after apply)
2026-04-11 00:02:30.593912 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.593916 | orchestrator |
2026-04-11 00:02:30.593920 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.593924 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-04-11 00:02:30.593928 | orchestrator |         }
2026-04-11 00:02:30.593931 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.593935 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-11 00:02:30.593939 | orchestrator |         }
2026-04-11 00:02:30.593942 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.593946 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-04-11 00:02:30.593950 | orchestrator |         }
2026-04-11 00:02:30.593954 | orchestrator |
2026-04-11 00:02:30.593957 | orchestrator |       + binding (known after apply)
2026-04-11 00:02:30.593961 | orchestrator |
2026-04-11 00:02:30.593965 | orchestrator |       + fixed_ip {
2026-04-11 00:02:30.593969 | orchestrator |           + ip_address = "192.168.16.14"
2026-04-11 00:02:30.593972 | orchestrator |           + subnet_id = (known after apply)
2026-04-11 00:02:30.593976 | orchestrator |         }
2026-04-11 00:02:30.593980 | orchestrator |     }
2026-04-11 00:02:30.593984 | orchestrator |
2026-04-11 00:02:30.593987 | orchestrator |   # openstack_networking_port_v2.node_port_management[5] will be created
2026-04-11 00:02:30.593991 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-11 00:02:30.593995 | orchestrator |       + admin_state_up = (known after apply)
2026-04-11 00:02:30.593999 | orchestrator |       + all_fixed_ips = (known after apply)
2026-04-11 00:02:30.594002 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-11 00:02:30.594006 | orchestrator |       + all_tags = (known after apply)
2026-04-11 00:02:30.594010 | orchestrator |       + device_id = (known after apply)
2026-04-11 00:02:30.594035 | orchestrator |       + device_owner = (known after apply)
2026-04-11 00:02:30.594040 | orchestrator |       + dns_assignment = (known after apply)
2026-04-11 00:02:30.594043 | orchestrator |       + dns_name = (known after apply)
2026-04-11 00:02:30.594047 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.594051 | orchestrator |       + mac_address = (known after apply)
2026-04-11 00:02:30.594055 | orchestrator |       + network_id = (known after apply)
2026-04-11 00:02:30.594058 | orchestrator |       + port_security_enabled = (known after apply)
2026-04-11 00:02:30.594062 | orchestrator |       + qos_policy_id = (known after apply)
2026-04-11 00:02:30.594070 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.594074 | orchestrator |       + security_group_ids = (known after apply)
2026-04-11 00:02:30.594077 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.594081 | orchestrator |
2026-04-11 00:02:30.594085 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.594089 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-04-11 00:02:30.594093 | orchestrator |         }
2026-04-11 00:02:30.594096 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.594105 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-11 00:02:30.594108 | orchestrator |         }
2026-04-11 00:02:30.594112 | orchestrator |       + allowed_address_pairs {
2026-04-11 00:02:30.594116 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-04-11 00:02:30.594120 | orchestrator |         }
2026-04-11 00:02:30.594123 | orchestrator |
2026-04-11 00:02:30.594130 | orchestrator |       + binding (known after apply)
2026-04-11 00:02:30.594134 | orchestrator |
2026-04-11 00:02:30.594138 | orchestrator |       + fixed_ip {
2026-04-11 00:02:30.594142 | orchestrator |           + ip_address = "192.168.16.15"
2026-04-11 00:02:30.594146 | orchestrator |           + subnet_id = (known after apply)
2026-04-11 00:02:30.594149 | orchestrator |         }
2026-04-11 00:02:30.594153 | orchestrator |     }
2026-04-11 00:02:30.594157 | orchestrator |
2026-04-11 00:02:30.594161 | orchestrator |   # openstack_networking_router_interface_v2.router_interface will be created
2026-04-11 00:02:30.594164 | orchestrator |   + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-04-11 00:02:30.594168 | orchestrator |       + force_destroy = false
2026-04-11 00:02:30.594172 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.594176 | orchestrator |       + port_id = (known after apply)
2026-04-11 00:02:30.594179 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.594183 | orchestrator |       + router_id = (known after apply)
2026-04-11 00:02:30.594187 | orchestrator |       + subnet_id = (known after apply)
2026-04-11 00:02:30.594190 | orchestrator |     }
2026-04-11 00:02:30.594194 | orchestrator |
2026-04-11 00:02:30.594198 | orchestrator |   # openstack_networking_router_v2.router will be created
2026-04-11 00:02:30.594202 | orchestrator |   + resource "openstack_networking_router_v2" "router" {
2026-04-11 00:02:30.594206 | orchestrator |       + admin_state_up = (known after apply)
2026-04-11 00:02:30.594209 | orchestrator |       + all_tags = (known after apply)
2026-04-11 00:02:30.594213 | orchestrator |       + availability_zone_hints = [
2026-04-11 00:02:30.594217 | orchestrator |           + "nova",
2026-04-11 00:02:30.594221 | orchestrator |         ]
2026-04-11 00:02:30.594224 | orchestrator |       + distributed = (known after apply)
2026-04-11 00:02:30.594228 | orchestrator |       + enable_snat = (known after apply)
2026-04-11 00:02:30.594232 | orchestrator |       + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-04-11 00:02:30.594236 | orchestrator |       + external_qos_policy_id = (known after apply)
2026-04-11 00:02:30.594239 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.594243 | orchestrator |       + name = "testbed"
2026-04-11 00:02:30.594247 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.594251 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.594255 | orchestrator |
2026-04-11 00:02:30.594258 | orchestrator |       + external_fixed_ip (known after apply)
2026-04-11 00:02:30.594262 | orchestrator |     }
2026-04-11 00:02:30.594266 | orchestrator |
2026-04-11 00:02:30.594270 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-04-11 00:02:30.594274 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-04-11 00:02:30.594278 | orchestrator |       + description = "ssh"
2026-04-11 00:02:30.594281 | orchestrator |       + direction = "ingress"
2026-04-11 00:02:30.594285 | orchestrator |       + ethertype = "IPv4"
2026-04-11 00:02:30.594289 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.594293 | orchestrator |       + port_range_max = 22
2026-04-11 00:02:30.594296 | orchestrator |       + port_range_min = 22
2026-04-11 00:02:30.594300 | orchestrator |       + protocol = "tcp"
2026-04-11 00:02:30.594304 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.594311 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-11 00:02:30.594315 | orchestrator |       + remote_group_id = (known after apply)
2026-04-11 00:02:30.594318 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-04-11 00:02:30.594322 | orchestrator |       + security_group_id = (known after apply)
2026-04-11 00:02:30.594326 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.594330 | orchestrator |     }
2026-04-11 00:02:30.594333 | orchestrator |
2026-04-11 00:02:30.594337 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-04-11 00:02:30.594341 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-04-11 00:02:30.594345 | orchestrator |       + description = "wireguard"
2026-04-11 00:02:30.594348 | orchestrator |       + direction = "ingress"
2026-04-11 00:02:30.594352 | orchestrator |       + ethertype = "IPv4"
2026-04-11 00:02:30.594356 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.594360 | orchestrator |       + port_range_max = 51820
2026-04-11 00:02:30.594363 | orchestrator |       + port_range_min = 51820
2026-04-11 00:02:30.594367 | orchestrator |       + protocol = "udp"
2026-04-11 00:02:30.594371 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.594375 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-11 00:02:30.594378 | orchestrator |       + remote_group_id = (known after apply)
2026-04-11 00:02:30.594382 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-04-11 00:02:30.594386 | orchestrator |       + security_group_id = (known after apply)
2026-04-11 00:02:30.594390 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.594393 | orchestrator |     }
2026-04-11 00:02:30.594397 | orchestrator |
2026-04-11 00:02:30.594401 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-04-11 00:02:30.594404 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-04-11 00:02:30.594408 | orchestrator |       + direction = "ingress"
2026-04-11 00:02:30.594412 | orchestrator |       + ethertype = "IPv4"
2026-04-11 00:02:30.594416 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.594419 | orchestrator |       + protocol = "tcp"
2026-04-11 00:02:30.594423 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.594427 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-11 00:02:30.594430 | orchestrator |       + remote_group_id = (known after apply)
2026-04-11 00:02:30.594434 | orchestrator |       + remote_ip_prefix = "192.168.16.0/20"
2026-04-11 00:02:30.594438 | orchestrator |       + security_group_id = (known after apply)
2026-04-11 00:02:30.594442 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.594445 | orchestrator |     }
2026-04-11 00:02:30.594449 | orchestrator |
2026-04-11 00:02:30.594453 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-04-11 00:02:30.594457 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-04-11 00:02:30.594460 | orchestrator |       + direction = "ingress"
2026-04-11 00:02:30.594464 | orchestrator |       + ethertype = "IPv4"
2026-04-11 00:02:30.594468 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.594474 | orchestrator |       + protocol = "udp"
2026-04-11 00:02:30.594478 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.594482 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-11 00:02:30.594486 | orchestrator |       + remote_group_id = (known after apply)
2026-04-11 00:02:30.594489 | orchestrator |       + remote_ip_prefix = "192.168.16.0/20"
2026-04-11 00:02:30.594493 | orchestrator |       + security_group_id = (known after apply)
2026-04-11 00:02:30.594497 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.594501 | orchestrator |     }
2026-04-11 00:02:30.594504 | orchestrator |
2026-04-11 00:02:30.594508 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-04-11 00:02:30.594515 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-04-11 00:02:30.594519 | orchestrator |       + direction = "ingress"
2026-04-11 00:02:30.594523 | orchestrator |       + ethertype = "IPv4"
2026-04-11 00:02:30.594526 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.594530 | orchestrator |       + protocol = "icmp"
2026-04-11 00:02:30.594534 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.594538 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-11 00:02:30.594541 | orchestrator |       + remote_group_id = (known after apply)
2026-04-11 00:02:30.594545 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-04-11 00:02:30.594549 | orchestrator |       + security_group_id = (known after apply)
2026-04-11 00:02:30.594553 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.594556 | orchestrator |     }
2026-04-11 00:02:30.594560 | orchestrator |
2026-04-11 00:02:30.594564 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-04-11 00:02:30.594568 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-04-11 00:02:30.594571 | orchestrator |       + direction = "ingress"
2026-04-11 00:02:30.594575 | orchestrator |       + ethertype = "IPv4"
2026-04-11 00:02:30.594579 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.594583 | orchestrator |       + protocol = "tcp"
2026-04-11 00:02:30.594587 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.594590 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-11 00:02:30.594597 | orchestrator |       + remote_group_id = (known after apply)
2026-04-11 00:02:30.594601 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-04-11 00:02:30.594605 | orchestrator |       + security_group_id = (known after apply)
2026-04-11 00:02:30.594609 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.594612 | orchestrator |     }
2026-04-11 00:02:30.594616 | orchestrator |
2026-04-11 00:02:30.594620 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-04-11 00:02:30.594624 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-04-11 00:02:30.594627 | orchestrator |       + direction = "ingress"
2026-04-11 00:02:30.594631 | orchestrator |       + ethertype = "IPv4"
2026-04-11 00:02:30.594635 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.594639 | orchestrator |       + protocol = "udp"
2026-04-11 00:02:30.594642 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.594646 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-11 00:02:30.594650 | orchestrator |       + remote_group_id = (known after apply)
2026-04-11 00:02:30.594654 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-04-11 00:02:30.594657 | orchestrator |       + security_group_id = (known after apply)
2026-04-11 00:02:30.594661 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.594665 | orchestrator |     }
2026-04-11 00:02:30.594668 | orchestrator |
2026-04-11 00:02:30.594672 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-04-11 00:02:30.594676 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-04-11 00:02:30.594680 | orchestrator |       + direction = "ingress"
2026-04-11 00:02:30.594686 | orchestrator |       + ethertype = "IPv4"
2026-04-11 00:02:30.594690 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.594693 | orchestrator |       + protocol = "icmp"
2026-04-11 00:02:30.594697 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.594701 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-11 00:02:30.594705 | orchestrator |       + remote_group_id = (known after apply)
2026-04-11 00:02:30.594708 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-04-11 00:02:30.594712 | orchestrator |       + security_group_id = (known after apply)
2026-04-11 00:02:30.594716 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.594725 | orchestrator |     }
2026-04-11 00:02:30.594729 | orchestrator |
2026-04-11 00:02:30.594733 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-04-11 00:02:30.594736 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-04-11 00:02:30.594740 | orchestrator |       + description = "vrrp"
2026-04-11 00:02:30.594744 | orchestrator |       + direction = "ingress"
2026-04-11 00:02:30.594748 | orchestrator |       + ethertype = "IPv4"
2026-04-11 00:02:30.594751 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.594755 | orchestrator |       + protocol = "112"
2026-04-11 00:02:30.594759 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.594763 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-11 00:02:30.594766 | orchestrator |       + remote_group_id = (known after apply)
2026-04-11 00:02:30.594770 | orchestrator |       + remote_ip_prefix = "0.0.0.0/0"
2026-04-11 00:02:30.594774 | orchestrator |       + security_group_id = (known after apply)
2026-04-11 00:02:30.594777 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.594781 | orchestrator |     }
2026-04-11 00:02:30.594785 | orchestrator |
2026-04-11 00:02:30.594789 | orchestrator |   # openstack_networking_secgroup_v2.security_group_management will be created
2026-04-11 00:02:30.594793 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-04-11 00:02:30.594796 | orchestrator |       + all_tags = (known after apply)
2026-04-11 00:02:30.594800 | orchestrator |       + description = "management security group"
2026-04-11 00:02:30.594804 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.594808 | orchestrator |       + name = "testbed-management"
2026-04-11 00:02:30.594815 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.594818 | orchestrator |       + stateful = (known after apply)
2026-04-11 00:02:30.594842 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.594846 | orchestrator |     }
2026-04-11 00:02:30.594850 | orchestrator |
2026-04-11 00:02:30.594854 | orchestrator |   # openstack_networking_secgroup_v2.security_group_node will be created
2026-04-11 00:02:30.594858 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-04-11 00:02:30.595493 | orchestrator |       + all_tags = (known after apply)
2026-04-11 00:02:30.595516 | orchestrator |       + description = "node security group"
2026-04-11 00:02:30.595520 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.595524 | orchestrator |       + name = "testbed-node"
2026-04-11 00:02:30.595544 | orchestrator |       + region = (known after apply)
2026-04-11 00:02:30.595548 | orchestrator |       + stateful = (known after apply)
2026-04-11 00:02:30.595552 | orchestrator |       + tenant_id = (known after apply)
2026-04-11 00:02:30.595555 | orchestrator |     }
2026-04-11 00:02:30.595559 | orchestrator |
2026-04-11 00:02:30.595563 | orchestrator |   # openstack_networking_subnet_v2.subnet_management will be created
2026-04-11 00:02:30.595568 | orchestrator |   + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-04-11 00:02:30.595572 | orchestrator |       + all_tags = (known after apply)
2026-04-11 00:02:30.595576 | orchestrator |       + cidr = "192.168.16.0/20"
2026-04-11 00:02:30.595580 | orchestrator |       + dns_nameservers = [
2026-04-11 00:02:30.595584 | orchestrator |           + "8.8.8.8",
2026-04-11 00:02:30.595588 | orchestrator |           + "9.9.9.9",
2026-04-11 00:02:30.595592 | orchestrator |         ]
2026-04-11 00:02:30.595596 | orchestrator |       + enable_dhcp = true
2026-04-11 00:02:30.595600 | orchestrator |       + gateway_ip = (known after apply)
2026-04-11 00:02:30.595604 | orchestrator |       + id = (known after apply)
2026-04-11 00:02:30.595608 | orchestrator |       + ip_version = 4
2026-04-11 00:02:30.595612 | orchestrator |       + ipv6_address_mode = (known after apply)
2026-04-11 00:02:30.595615 | orchestrator |       + ipv6_ra_mode = (known after apply)
2026-04-11 00:02:30.595619 | orchestrator |       + name = "subnet-testbed-management"
2026-04-11 00:02:30.595623 | orchestrator | + network_id = (known after apply) 2026-04-11 00:02:30.595627 | orchestrator | + no_gateway = false 2026-04-11 00:02:30.595630 | orchestrator | + region = (known after apply) 2026-04-11 00:02:30.595634 | orchestrator | + service_types = (known after apply) 2026-04-11 00:02:30.595647 | orchestrator | + tenant_id = (known after apply) 2026-04-11 00:02:30.595651 | orchestrator | 2026-04-11 00:02:30.595655 | orchestrator | + allocation_pool { 2026-04-11 00:02:30.595659 | orchestrator | + end = "192.168.31.250" 2026-04-11 00:02:30.595663 | orchestrator | + start = "192.168.31.200" 2026-04-11 00:02:30.595667 | orchestrator | } 2026-04-11 00:02:30.595670 | orchestrator | } 2026-04-11 00:02:30.595674 | orchestrator | 2026-04-11 00:02:30.595678 | orchestrator | # terraform_data.image will be created 2026-04-11 00:02:30.595682 | orchestrator | + resource "terraform_data" "image" { 2026-04-11 00:02:30.595686 | orchestrator | + id = (known after apply) 2026-04-11 00:02:30.595689 | orchestrator | + input = "Ubuntu 24.04" 2026-04-11 00:02:30.595693 | orchestrator | + output = (known after apply) 2026-04-11 00:02:30.595697 | orchestrator | } 2026-04-11 00:02:30.595701 | orchestrator | 2026-04-11 00:02:30.595705 | orchestrator | # terraform_data.image_node will be created 2026-04-11 00:02:30.595709 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-11 00:02:30.595712 | orchestrator | + id = (known after apply) 2026-04-11 00:02:30.595716 | orchestrator | + input = "Ubuntu 24.04" 2026-04-11 00:02:30.595729 | orchestrator | + output = (known after apply) 2026-04-11 00:02:30.595733 | orchestrator | } 2026-04-11 00:02:30.595736 | orchestrator | 2026-04-11 00:02:30.595740 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
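For readers following the plan output above, a security-group rule entry such as `security_group_node_rule1` corresponds to HCL roughly like the following. This is a sketch inferred from the plan attributes shown in the log; the `security_group_id` reference is an assumption, and the actual Terraform source in the testbed repository may differ:

```hcl
# Sketch of the HCL behind the "security_group_node_rule1" plan entry.
# direction, ethertype, protocol, and remote_ip_prefix are taken verbatim
# from the plan output; the security_group_id reference is assumed.
resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```

Attributes shown as `(known after apply)` in the plan (`id`, `region`, `tenant_id`, and so on) are computed by the provider and are not set in the configuration.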
2026-04-11 00:02:30.595751 | orchestrator |
2026-04-11 00:02:30.595762 | orchestrator | Changes to Outputs:
2026-04-11 00:02:30.595772 | orchestrator | + manager_address = (sensitive value)
2026-04-11 00:02:30.595777 | orchestrator | + private_key = (sensitive value)
2026-04-11 00:02:30.827465 | orchestrator | terraform_data.image_node: Creating...
2026-04-11 00:02:30.827541 | orchestrator | terraform_data.image: Creating...
2026-04-11 00:02:30.827563 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=16b525a7-076f-4cfb-d2df-2050920b1871]
2026-04-11 00:02:30.827965 | orchestrator | terraform_data.image: Creation complete after 0s [id=325c1358-7554-ebd2-a475-5705f479b69c]
2026-04-11 00:02:30.842670 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-11 00:02:30.842970 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-11 00:02:30.851727 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-11 00:02:30.852598 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-11 00:02:30.852903 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-11 00:02:30.853147 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-11 00:02:30.856173 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-11 00:02:30.856190 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-11 00:02:30.856194 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-11 00:02:30.856199 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-11 00:02:31.317008 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-11 00:02:31.323977 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-11 00:02:31.329581 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-11 00:02:31.336185 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-11 00:02:31.340352 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-04-11 00:02:31.343657 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-11 00:02:31.807787 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=323089bb-0286-42f1-a65d-7d463a725bc1]
2026-04-11 00:02:31.817246 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-11 00:02:34.500862 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=73d9f47d-c224-4fc5-85fe-6159d35bcb3b]
2026-04-11 00:02:34.503361 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=baefa5b6-df7d-4524-a048-85b2fa508a08]
2026-04-11 00:02:34.510375 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-11 00:02:34.514101 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-11 00:02:34.522333 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=95d65739-9dbf-4d48-9a83-eea9c175e7b9]
2026-04-11 00:02:34.525619 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=a6786cac-aebf-4828-8780-9d987947f514]
2026-04-11 00:02:34.526510 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-11 00:02:34.530723 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-11 00:02:34.547660 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=5beaa40e-2504-4a54-8b0e-fa95ef0fce86]
2026-04-11 00:02:34.562802 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-11 00:02:34.563195 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=766868e9-fa25-49fa-8c1f-44c6e6ffb547]
2026-04-11 00:02:34.568514 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-11 00:02:34.619460 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=94d6c339-d22c-4552-bea0-bfaa215148a5]
2026-04-11 00:02:34.627694 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=f9459d28-2556-4a8a-b254-3aab23978995]
2026-04-11 00:02:34.637886 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-11 00:02:34.642455 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-11 00:02:34.645786 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=dbdf8861-c941-4e45-ade6-c92a4c866684]
2026-04-11 00:02:34.650317 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=8e4fdcb3373352d5960959944d498270286594a4]
2026-04-11 00:02:34.650959 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=3f88702487abf07a275c74454888df815d1f84c0]
2026-04-11 00:02:34.653181 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-11 00:02:35.171497 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=19f6c328-a561-4897-bc92-2bbedec03b22]
2026-04-11 00:02:35.622968 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=e9e2b4d0-c602-45b3-9aeb-208ddd395891]
2026-04-11 00:02:35.636353 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-11 00:02:37.943776 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=b25588ff-c947-4792-a589-2d084d0eef3c]
2026-04-11 00:02:38.002877 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=ec80c904-46fb-4b7e-ade7-56cb246a0394]
2026-04-11 00:02:38.020359 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=6b55b66d-3192-4580-be0d-0026f43c659c]
2026-04-11 00:02:38.058086 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=d951abfa-6c54-4adb-91f6-ed79e55ad62b]
2026-04-11 00:02:38.061246 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=ffe3b7cb-129f-49c7-87a5-a6b31be356ef]
2026-04-11 00:02:38.083264 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=27525bbe-abd4-48e3-93a9-60b04238c189]
2026-04-11 00:02:39.275175 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=a5f5d4f4-0cd3-4ef1-ab69-424d766e8023]
2026-04-11 00:02:39.282491 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-11 00:02:39.282557 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-11 00:02:39.283794 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-11 00:02:39.469619 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=4ea1c06a-7cbe-40a9-84ce-acccc1db66b5]
2026-04-11 00:02:39.476457 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-11 00:02:39.476674 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-11 00:02:39.479105 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-11 00:02:39.479715 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-11 00:02:39.482194 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-11 00:02:39.485070 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-11 00:02:39.611112 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=3d682cd0-874f-414c-9d72-1940b548c3b8]
2026-04-11 00:02:39.627742 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-11 00:02:39.630348 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-11 00:02:39.631518 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-11 00:02:39.651740 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=0403240d-b303-41dc-a167-3d52097ecdf3]
2026-04-11 00:02:39.663565 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-11 00:02:39.869307 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=bfae7988-2c9b-4400-956b-a416140ed5b2]
2026-04-11 00:02:39.882239 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-11 00:02:40.272537 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=e1d2b0c2-6da0-42e8-b16c-b80f35010f4c]
2026-04-11 00:02:40.283049 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-11 00:02:40.320959 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=d5c72eea-71b3-4034-b533-ded7ce17674e]
2026-04-11 00:02:40.325573 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=40fdd82c-89ba-4707-a35a-679cd34bd196]
2026-04-11 00:02:40.328172 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-11 00:02:40.335472 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-11 00:02:40.611702 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=2675e5f3-e203-4e41-9155-40821df53afb]
2026-04-11 00:02:40.617592 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-11 00:02:40.724482 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=9dab55fe-1305-4b29-a82e-4e644bfb8696]
2026-04-11 00:02:40.731138 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-11 00:02:40.827046 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=9d87224b-605b-4397-8197-29fbca720203]
2026-04-11 00:02:40.986410 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=1fd5e369-e8bb-4a8d-997b-24abc6376b8a]
2026-04-11 00:02:41.095776 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=fe3f79b4-34eb-4e05-9792-164843e7f1fb]
2026-04-11 00:02:41.209146 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=2f210c2b-dffa-457b-8eec-d6aef2448544]
2026-04-11 00:02:41.396126 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=7d85dea2-fcec-41ea-8918-1a5aaa906c9a]
2026-04-11 00:02:41.592547 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=de529e63-8638-4c26-89cd-d2f42cfa0470]
2026-04-11 00:02:41.624366 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 3s [id=ee68d05c-72e3-47cd-b7b7-c5ee404aea3d]
2026-04-11 00:02:41.741276 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=cee7ffa5-cf29-4376-9e7c-f48b9b88bde6]
2026-04-11 00:02:42.189804 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=b4380fd5-a3c8-4033-b143-69b58d8975d3]
2026-04-11 00:02:42.595584 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=62610ef3-4de5-45cb-96e5-bf43eae212f6]
2026-04-11 00:02:42.616602 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-11 00:02:42.627435 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-11 00:02:42.629237 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-11 00:02:42.630635 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-11 00:02:42.642125 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-11 00:02:42.643463 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-11 00:02:42.645178 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-11 00:02:44.357749 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=a1fa995c-9410-445b-bb87-7a35b23e46a4]
2026-04-11 00:02:44.376442 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-11 00:02:44.379734 | orchestrator | local_file.inventory: Creating...
2026-04-11 00:02:44.383797 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-11 00:02:44.707812 | orchestrator | local_file.inventory: Creation complete after 1s [id=c663c853411167f79e4ab2e6d823f3fed56dd9d0]
2026-04-11 00:02:44.820320 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 1s [id=aed9d8db15edc54ea5a39becf9845d76b8aef273]
2026-04-11 00:02:45.968370 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=a1fa995c-9410-445b-bb87-7a35b23e46a4]
2026-04-11 00:02:52.631936 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-11 00:02:52.632122 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-11 00:02:52.632168 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-11 00:02:52.643260 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-11 00:02:52.644477 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-11 00:02:52.645655 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-11 00:03:02.640322 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-11 00:03:02.640438 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-11 00:03:02.640453 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-11 00:03:02.643647 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-11 00:03:02.644807 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-11 00:03:02.646135 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-11 00:03:12.648118 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-11 00:03:12.648268 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-04-11 00:03:12.648297 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-11 00:03:12.648318 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-11 00:03:12.648338 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-11 00:03:12.648358 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-11 00:03:13.898440 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=4aa0b010-ff94-4989-813c-dc2c92ddc5d6]
2026-04-11 00:03:22.656204 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-04-11 00:03:22.656311 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-04-11 00:03:22.656332 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-04-11 00:03:22.656341 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-04-11 00:03:22.656375 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-04-11 00:03:24.112766 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=ed92ea17-928d-401f-abd1-bfd2920eba4c]
2026-04-11 00:03:25.552351 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 43s [id=5842a85a-a4c5-4f86-bbc0-4f2cdf83282b]
2026-04-11 00:03:32.656622 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-04-11 00:03:32.656743 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-04-11 00:03:32.656753 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-04-11 00:03:42.665489 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed]
2026-04-11 00:03:42.665608 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m0s elapsed]
2026-04-11 00:03:42.665621 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m0s elapsed]
2026-04-11 00:03:44.593844 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 1m2s [id=3e5ba92e-8d93-45f3-8f93-627552461cb4]
2026-04-11 00:03:52.673379 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m10s elapsed]
2026-04-11 00:03:52.673472 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m10s elapsed]
2026-04-11 00:03:54.528577 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m12s [id=a3a40be9-931b-4757-86ba-7f6e0bfca044]
2026-04-11 00:04:02.674278 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m20s elapsed]
2026-04-11 00:04:04.421065 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m21s [id=0fe506d2-aa3f-4c19-8456-bcb3c697bf7d]
2026-04-11 00:04:04.444520 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-11 00:04:04.445549 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=5281171181120317017]
2026-04-11 00:04:04.447328 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-11 00:04:04.451613 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-11 00:04:04.452083 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-11 00:04:04.452145 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-11 00:04:04.452242 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-11 00:04:04.452607 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-11 00:04:04.459908 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-11 00:04:04.467464 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-11 00:04:04.468045 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-11 00:04:04.470372 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-11 00:04:07.867892 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=4aa0b010-ff94-4989-813c-dc2c92ddc5d6/73d9f47d-c224-4fc5-85fe-6159d35bcb3b]
2026-04-11 00:04:07.896194 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=5842a85a-a4c5-4f86-bbc0-4f2cdf83282b/a6786cac-aebf-4828-8780-9d987947f514]
2026-04-11 00:04:07.896496 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=ed92ea17-928d-401f-abd1-bfd2920eba4c/766868e9-fa25-49fa-8c1f-44c6e6ffb547]
2026-04-11 00:04:07.930757 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=4aa0b010-ff94-4989-813c-dc2c92ddc5d6/5beaa40e-2504-4a54-8b0e-fa95ef0fce86]
2026-04-11 00:04:07.958571 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=5842a85a-a4c5-4f86-bbc0-4f2cdf83282b/f9459d28-2556-4a8a-b254-3aab23978995]
2026-04-11 00:04:07.964889 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=ed92ea17-928d-401f-abd1-bfd2920eba4c/baefa5b6-df7d-4524-a048-85b2fa508a08]
2026-04-11 00:04:14.039736 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=4aa0b010-ff94-4989-813c-dc2c92ddc5d6/94d6c339-d22c-4552-bea0-bfaa215148a5]
2026-04-11 00:04:14.048274 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=ed92ea17-928d-401f-abd1-bfd2920eba4c/dbdf8861-c941-4e45-ade6-c92a4c866684]
2026-04-11 00:04:14.075522 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=5842a85a-a4c5-4f86-bbc0-4f2cdf83282b/95d65739-9dbf-4d48-9a83-eea9c175e7b9]
2026-04-11 00:04:14.473271 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-11 00:04:24.477645 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-11 00:04:25.179310 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=84bbf176-59df-43e0-bbcc-72a35ccdf2da]
2026-04-11 00:04:27.386277 | orchestrator |
2026-04-11 00:04:27.386370 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-11 00:04:27.386416 | orchestrator |
2026-04-11 00:04:27.386440 | orchestrator | Outputs:
2026-04-11 00:04:27.386448 | orchestrator |
2026-04-11 00:04:27.386490 | orchestrator | manager_address =
2026-04-11 00:04:27.386501 | orchestrator | private_key =
2026-04-11 00:04:27.468453 | orchestrator | ok: Runtime: 0:02:02.708108
2026-04-11 00:04:27.520591 |
2026-04-11 00:04:27.520757 | TASK [Fetch manager address]
2026-04-11 00:04:27.995787 | orchestrator | ok
2026-04-11 00:04:28.006269 |
2026-04-11 00:04:28.006411 | TASK [Set manager_host address]
2026-04-11 00:04:28.077091 | orchestrator | ok
2026-04-11 00:04:28.086280 |
2026-04-11 00:04:28.086411 | LOOP [Update ansible collections]
2026-04-11 00:04:29.132587 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-11 00:04:29.132865 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-11 00:04:29.132903 | orchestrator | Starting galaxy collection install process
2026-04-11 00:04:29.132928 | orchestrator | Process install dependency map
2026-04-11 00:04:29.132950 | orchestrator | Starting collection install process
2026-04-11 00:04:29.132970 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2026-04-11 00:04:29.132994 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2026-04-11 00:04:29.133019 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-11 00:04:29.133066 | orchestrator | ok: Item: commons Runtime: 0:00:00.681255
2026-04-11 00:04:30.147099 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-11 00:04:30.147312 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-11 00:04:30.147361 | orchestrator | Starting galaxy collection install process
2026-04-11 00:04:30.147398 | orchestrator | Process install dependency map
2026-04-11 00:04:30.147433 | orchestrator | Starting collection install process
2026-04-11 00:04:30.147466 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2026-04-11 00:04:30.147499 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2026-04-11 00:04:30.147531 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-11 00:04:30.147581 | orchestrator | ok: Item: services Runtime: 0:00:00.667778
2026-04-11 00:04:30.165333 |
2026-04-11 00:04:30.165500 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-11 00:04:41.199933 | orchestrator | ok
2026-04-11 00:04:41.210257 |
2026-04-11 00:04:41.210395 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-11 00:05:41.256566 | orchestrator | ok
2026-04-11 00:05:41.267627 |
2026-04-11 00:05:41.267741 | TASK [Fetch manager ssh hostkey]
2026-04-11 00:05:42.858719 | orchestrator | Output suppressed because no_log was given
2026-04-11 00:05:42.877314 |
2026-04-11 00:05:42.877856 | TASK [Get ssh keypair from terraform environment]
2026-04-11 00:05:43.433300 | orchestrator | ok: Runtime: 0:00:00.011340
2026-04-11 00:05:43.441568 |
2026-04-11 00:05:43.441691 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-11 00:05:43.482559 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-11 00:05:43.492324 |
2026-04-11 00:05:43.492444 | TASK [Run manager part 0]
2026-04-11 00:05:44.583372 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-11 00:05:44.650170 | orchestrator |
2026-04-11 00:05:44.650241 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-11 00:05:44.650252 | orchestrator |
2026-04-11 00:05:44.650271 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-11 00:05:47.031149 | orchestrator | ok: [testbed-manager]
2026-04-11 00:05:47.031234 | orchestrator |
2026-04-11 00:05:47.031268 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-11 00:05:47.031281 | orchestrator |
2026-04-11 00:05:47.031294 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-11 00:05:49.056429 | orchestrator | ok: [testbed-manager]
2026-04-11 00:05:49.056508 | orchestrator |
2026-04-11 00:05:49.056515 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-11 00:05:49.767103 | orchestrator | ok: [testbed-manager]
2026-04-11 00:05:49.767163 | orchestrator |
2026-04-11 00:05:49.767215 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-11 00:05:49.809155 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:05:49.809235 | orchestrator |
2026-04-11 00:05:49.809248 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-11 00:05:49.855034 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:05:49.855092 | orchestrator |
2026-04-11 00:05:49.855103 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-11 00:05:49.893053 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:05:49.893113 | orchestrator |
2026-04-11 00:05:49.893122 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-11 00:05:50.621917 | orchestrator | changed: [testbed-manager]
2026-04-11 00:05:50.621968 | orchestrator |
2026-04-11 00:05:50.621974 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-11 00:08:30.882855 | orchestrator | changed: [testbed-manager]
2026-04-11 00:08:30.883038 | orchestrator |
2026-04-11 00:08:30.883055 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-11 00:09:50.854304 | orchestrator | changed: [testbed-manager]
2026-04-11 00:09:50.854405 | orchestrator |
2026-04-11 00:09:50.854426 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-11 00:10:11.456318 | orchestrator | changed: [testbed-manager]
2026-04-11 00:10:11.456415 | orchestrator |
2026-04-11 00:10:11.456435 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-11 00:10:20.299712 | orchestrator | changed: [testbed-manager]
2026-04-11 00:10:20.300142 | orchestrator |
2026-04-11 00:10:20.300179 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-11 00:10:20.348493 | orchestrator | ok: [testbed-manager]
2026-04-11 00:10:20.348572 | orchestrator |
2026-04-11 00:10:20.348589 | orchestrator | TASK [Get current user] ********************************************************
2026-04-11 00:10:21.156002 | orchestrator | ok: [testbed-manager]
2026-04-11 00:10:21.156174
| orchestrator | 2026-04-11 00:10:21.156184 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-11 00:10:21.889015 | orchestrator | changed: [testbed-manager] 2026-04-11 00:10:21.889071 | orchestrator | 2026-04-11 00:10:21.889087 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-11 00:10:28.115903 | orchestrator | changed: [testbed-manager] 2026-04-11 00:10:28.115978 | orchestrator | 2026-04-11 00:10:28.115991 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-11 00:10:34.058443 | orchestrator | changed: [testbed-manager] 2026-04-11 00:10:34.058543 | orchestrator | 2026-04-11 00:10:34.058560 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-11 00:10:36.752294 | orchestrator | changed: [testbed-manager] 2026-04-11 00:10:36.752386 | orchestrator | 2026-04-11 00:10:36.752402 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-11 00:10:38.493306 | orchestrator | changed: [testbed-manager] 2026-04-11 00:10:38.493396 | orchestrator | 2026-04-11 00:10:38.493412 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-11 00:10:39.644475 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-11 00:10:39.644574 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-11 00:10:39.644587 | orchestrator | 2026-04-11 00:10:39.644599 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-11 00:10:39.694498 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-11 00:10:39.694580 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-04-11 00:10:39.694595 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-11 00:10:39.694610 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-11 00:10:42.926578 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-11 00:10:42.926618 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-11 00:10:42.926623 | orchestrator | 2026-04-11 00:10:42.926628 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-11 00:10:43.474064 | orchestrator | changed: [testbed-manager] 2026-04-11 00:10:43.474109 | orchestrator | 2026-04-11 00:10:43.474118 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-11 00:13:06.226631 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-11 00:13:06.226824 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-11 00:13:06.226836 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-11 00:13:06.226843 | orchestrator | 2026-04-11 00:13:06.226851 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-11 00:13:08.416407 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-11 00:13:08.416492 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-11 00:13:08.416504 | orchestrator | 2026-04-11 00:13:08.416517 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-11 00:13:08.416527 | orchestrator | 2026-04-11 00:13:08.416541 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-11 00:13:09.725808 | orchestrator | ok: [testbed-manager] 2026-04-11 00:13:09.725842 | orchestrator | 
2026-04-11 00:13:09.725848 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-11 00:13:09.764888 | orchestrator | ok: [testbed-manager] 2026-04-11 00:13:09.764921 | orchestrator | 2026-04-11 00:13:09.764927 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-11 00:13:09.828078 | orchestrator | ok: [testbed-manager] 2026-04-11 00:13:09.828133 | orchestrator | 2026-04-11 00:13:09.828141 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-11 00:13:10.563945 | orchestrator | changed: [testbed-manager] 2026-04-11 00:13:10.564032 | orchestrator | 2026-04-11 00:13:10.564051 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-11 00:13:11.248097 | orchestrator | changed: [testbed-manager] 2026-04-11 00:13:11.248184 | orchestrator | 2026-04-11 00:13:11.248200 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-11 00:13:12.589686 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-11 00:13:12.589737 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-11 00:13:12.589745 | orchestrator | 2026-04-11 00:13:12.589753 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-11 00:13:14.162114 | orchestrator | changed: [testbed-manager] 2026-04-11 00:13:14.162194 | orchestrator | 2026-04-11 00:13:14.162208 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-11 00:13:15.904187 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-11 00:13:15.904276 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-11 00:13:15.904309 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-11 00:13:15.904318 
| orchestrator | 2026-04-11 00:13:15.904327 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-11 00:13:15.960997 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:13:15.961051 | orchestrator | 2026-04-11 00:13:15.961058 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-11 00:13:16.032999 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:13:16.033056 | orchestrator | 2026-04-11 00:13:16.033061 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-11 00:13:16.600467 | orchestrator | changed: [testbed-manager] 2026-04-11 00:13:16.600544 | orchestrator | 2026-04-11 00:13:16.600557 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-11 00:13:16.675116 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:13:16.675199 | orchestrator | 2026-04-11 00:13:16.675214 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-11 00:13:17.514041 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-11 00:13:17.514097 | orchestrator | changed: [testbed-manager] 2026-04-11 00:13:17.514105 | orchestrator | 2026-04-11 00:13:17.514111 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-11 00:13:17.549863 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:13:17.549920 | orchestrator | 2026-04-11 00:13:17.549929 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-11 00:13:17.584685 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:13:17.584741 | orchestrator | 2026-04-11 00:13:17.584751 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-11 00:13:17.623000 | orchestrator | skipping: 
[testbed-manager] 2026-04-11 00:13:17.623055 | orchestrator | 2026-04-11 00:13:17.623062 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-11 00:13:17.705935 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:13:17.705995 | orchestrator | 2026-04-11 00:13:17.706007 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-11 00:13:18.467213 | orchestrator | ok: [testbed-manager] 2026-04-11 00:13:18.467297 | orchestrator | 2026-04-11 00:13:18.467304 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-11 00:13:18.467309 | orchestrator | 2026-04-11 00:13:18.467315 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-11 00:13:19.836516 | orchestrator | ok: [testbed-manager] 2026-04-11 00:13:19.836603 | orchestrator | 2026-04-11 00:13:19.836619 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-11 00:13:20.798167 | orchestrator | changed: [testbed-manager] 2026-04-11 00:13:20.798279 | orchestrator | 2026-04-11 00:13:20.798297 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:13:20.798312 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-11 00:13:20.798324 | orchestrator | 2026-04-11 00:13:21.364056 | orchestrator | ok: Runtime: 0:07:37.049508 2026-04-11 00:13:21.385157 | 2026-04-11 00:13:21.385328 | TASK [Point out that the log in on the manager is now possible] 2026-04-11 00:13:21.434334 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 

2026-04-11 00:13:21.444127 | 2026-04-11 00:13:21.444305 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-11 00:13:21.482352 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-11 00:13:21.491726 | 2026-04-11 00:13:21.491865 | TASK [Run manager part 1 + 2] 2026-04-11 00:13:22.392791 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-11 00:13:22.451396 | orchestrator | 2026-04-11 00:13:22.451435 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-11 00:13:22.451443 | orchestrator | 2026-04-11 00:13:22.451455 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-11 00:13:25.281605 | orchestrator | ok: [testbed-manager] 2026-04-11 00:13:25.281647 | orchestrator | 2026-04-11 00:13:25.281668 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-11 00:13:25.328536 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:13:25.328583 | orchestrator | 2026-04-11 00:13:25.328597 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-11 00:13:25.379649 | orchestrator | ok: [testbed-manager] 2026-04-11 00:13:25.379690 | orchestrator | 2026-04-11 00:13:25.379699 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-11 00:13:25.433188 | orchestrator | ok: [testbed-manager] 2026-04-11 00:13:25.433227 | orchestrator | 2026-04-11 00:13:25.433396 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-11 00:13:25.501034 | orchestrator | ok: [testbed-manager] 2026-04-11 00:13:25.501073 | orchestrator | 2026-04-11 00:13:25.501080 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-11 00:13:25.572470 | orchestrator | ok: [testbed-manager] 2026-04-11 00:13:25.572507 | orchestrator | 2026-04-11 00:13:25.572515 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-11 00:13:25.614694 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-11 00:13:25.614726 | orchestrator | 2026-04-11 00:13:25.614732 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-11 00:13:26.323128 | orchestrator | ok: [testbed-manager] 2026-04-11 00:13:26.323180 | orchestrator | 2026-04-11 00:13:26.323192 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-11 00:13:26.374559 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:13:26.374595 | orchestrator | 2026-04-11 00:13:26.374602 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-11 00:13:27.719491 | orchestrator | changed: [testbed-manager] 2026-04-11 00:13:27.719534 | orchestrator | 2026-04-11 00:13:27.719544 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-11 00:13:28.264062 | orchestrator | ok: [testbed-manager] 2026-04-11 00:13:28.264100 | orchestrator | 2026-04-11 00:13:28.264108 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-11 00:13:29.385084 | orchestrator | changed: [testbed-manager] 2026-04-11 00:13:29.385153 | orchestrator | 2026-04-11 00:13:29.385172 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-11 00:13:45.728820 | orchestrator | changed: [testbed-manager] 2026-04-11 00:13:45.728918 | orchestrator | 
2026-04-11 00:13:45.728936 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-11 00:13:46.429788 | orchestrator | ok: [testbed-manager] 2026-04-11 00:13:46.429838 | orchestrator | 2026-04-11 00:13:46.429851 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-11 00:13:46.483517 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:13:46.483555 | orchestrator | 2026-04-11 00:13:46.483562 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-11 00:13:47.468681 | orchestrator | changed: [testbed-manager] 2026-04-11 00:13:47.468743 | orchestrator | 2026-04-11 00:13:47.468756 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-11 00:13:48.431746 | orchestrator | changed: [testbed-manager] 2026-04-11 00:13:48.431838 | orchestrator | 2026-04-11 00:13:48.431856 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-11 00:13:48.974643 | orchestrator | changed: [testbed-manager] 2026-04-11 00:13:48.974726 | orchestrator | 2026-04-11 00:13:48.974742 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-11 00:13:49.005685 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-11 00:13:49.005794 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-11 00:13:49.005816 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-11 00:13:49.005831 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-11 00:13:52.211953 | orchestrator | changed: [testbed-manager] 2026-04-11 00:13:52.212032 | orchestrator | 2026-04-11 00:13:52.212045 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-11 00:14:01.044621 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-11 00:14:01.044722 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-11 00:14:01.044740 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-11 00:14:01.044753 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-11 00:14:01.044774 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-11 00:14:01.044785 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-11 00:14:01.044797 | orchestrator | 2026-04-11 00:14:01.044810 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-11 00:14:02.104326 | orchestrator | changed: [testbed-manager] 2026-04-11 00:14:02.104416 | orchestrator | 2026-04-11 00:14:02.104570 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-11 00:14:05.097777 | orchestrator | changed: [testbed-manager] 2026-04-11 00:14:05.097823 | orchestrator | 2026-04-11 00:14:05.097832 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-11 00:14:05.139855 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:14:05.139964 | orchestrator | 2026-04-11 00:14:05.140346 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-11 00:15:45.470909 | orchestrator | changed: [testbed-manager] 2026-04-11 00:15:45.471013 | orchestrator | 2026-04-11 00:15:45.471039 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-11 00:15:46.569697 | orchestrator | ok: [testbed-manager] 2026-04-11 00:15:46.569738 | 
orchestrator | 2026-04-11 00:15:46.569746 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:15:46.569754 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-11 00:15:46.569759 | orchestrator | 2026-04-11 00:15:47.125707 | orchestrator | ok: Runtime: 0:02:24.875557 2026-04-11 00:15:47.144278 | 2026-04-11 00:15:47.144435 | TASK [Reboot manager] 2026-04-11 00:15:48.683104 | orchestrator | ok: Runtime: 0:00:00.948566 2026-04-11 00:15:48.698700 | 2026-04-11 00:15:48.699004 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-11 00:16:03.542247 | orchestrator | ok 2026-04-11 00:16:03.551778 | 2026-04-11 00:16:03.551902 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-11 00:17:03.597762 | orchestrator | ok 2026-04-11 00:17:03.607656 | 2026-04-11 00:17:03.607791 | TASK [Deploy manager + bootstrap nodes] 2026-04-11 00:17:05.898412 | orchestrator | 2026-04-11 00:17:05.898621 | orchestrator | # DEPLOY MANAGER 2026-04-11 00:17:05.898648 | orchestrator | 2026-04-11 00:17:05.898664 | orchestrator | + set -e 2026-04-11 00:17:05.898678 | orchestrator | + echo 2026-04-11 00:17:05.898693 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-11 00:17:05.898710 | orchestrator | + echo 2026-04-11 00:17:05.898759 | orchestrator | + cat /opt/manager-vars.sh 2026-04-11 00:17:05.901878 | orchestrator | export NUMBER_OF_NODES=6 2026-04-11 00:17:05.901922 | orchestrator | 2026-04-11 00:17:05.901936 | orchestrator | export CEPH_VERSION=reef 2026-04-11 00:17:05.901949 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-11 00:17:05.901961 | orchestrator | export MANAGER_VERSION=10.0.0 2026-04-11 00:17:05.901983 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-11 00:17:05.901994 | orchestrator | 2026-04-11 00:17:05.902045 | orchestrator | export ARA=false 2026-04-11 00:17:05.902060 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-11 00:17:05.902078 | orchestrator | export TEMPEST=true 2026-04-11 00:17:05.902089 | orchestrator | export IS_ZUUL=true 2026-04-11 00:17:05.902100 | orchestrator | 2026-04-11 00:17:05.902119 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.75 2026-04-11 00:17:05.902130 | orchestrator | export EXTERNAL_API=false 2026-04-11 00:17:05.902141 | orchestrator | 2026-04-11 00:17:05.902152 | orchestrator | export IMAGE_USER=ubuntu 2026-04-11 00:17:05.902166 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-11 00:17:05.902177 | orchestrator | 2026-04-11 00:17:05.902188 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-11 00:17:05.902206 | orchestrator | 2026-04-11 00:17:05.902218 | orchestrator | + echo 2026-04-11 00:17:05.902231 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-11 00:17:05.902988 | orchestrator | ++ export INTERACTIVE=false 2026-04-11 00:17:05.903087 | orchestrator | ++ INTERACTIVE=false 2026-04-11 00:17:05.903112 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-11 00:17:05.903129 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-11 00:17:05.903154 | orchestrator | + source /opt/manager-vars.sh 2026-04-11 00:17:05.903166 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-11 00:17:05.903177 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-11 00:17:05.903188 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-11 00:17:05.903199 | orchestrator | ++ CEPH_VERSION=reef 2026-04-11 00:17:05.903210 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-11 00:17:05.903223 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-11 00:17:05.903234 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-11 00:17:05.903245 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-11 00:17:05.903256 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-11 00:17:05.903284 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-11 00:17:05.903302 | orchestrator | ++ export 
ARA=false 2026-04-11 00:17:05.903313 | orchestrator | ++ ARA=false 2026-04-11 00:17:05.903325 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-11 00:17:05.903336 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-11 00:17:05.903347 | orchestrator | ++ export TEMPEST=true 2026-04-11 00:17:05.903393 | orchestrator | ++ TEMPEST=true 2026-04-11 00:17:05.903408 | orchestrator | ++ export IS_ZUUL=true 2026-04-11 00:17:05.903418 | orchestrator | ++ IS_ZUUL=true 2026-04-11 00:17:05.903430 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.75 2026-04-11 00:17:05.903441 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.75 2026-04-11 00:17:05.903458 | orchestrator | ++ export EXTERNAL_API=false 2026-04-11 00:17:05.903469 | orchestrator | ++ EXTERNAL_API=false 2026-04-11 00:17:05.903480 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-11 00:17:05.903490 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-11 00:17:05.903501 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-11 00:17:05.903512 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-11 00:17:05.903523 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-11 00:17:05.903534 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-11 00:17:05.903545 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-11 00:17:05.954515 | orchestrator | + docker version 2026-04-11 00:17:06.078563 | orchestrator | Client: Docker Engine - Community 2026-04-11 00:17:06.078658 | orchestrator | Version: 27.5.1 2026-04-11 00:17:06.078672 | orchestrator | API version: 1.47 2026-04-11 00:17:06.078686 | orchestrator | Go version: go1.22.11 2026-04-11 00:17:06.078697 | orchestrator | Git commit: 9f9e405 2026-04-11 00:17:06.078709 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-11 00:17:06.078721 | orchestrator | OS/Arch: linux/amd64 2026-04-11 00:17:06.078732 | orchestrator | Context: default 2026-04-11 00:17:06.078744 | orchestrator | 2026-04-11 00:17:06.078755 | 
orchestrator | Server: Docker Engine - Community 2026-04-11 00:17:06.078767 | orchestrator | Engine: 2026-04-11 00:17:06.078778 | orchestrator | Version: 27.5.1 2026-04-11 00:17:06.078790 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-11 00:17:06.078834 | orchestrator | Go version: go1.22.11 2026-04-11 00:17:06.078846 | orchestrator | Git commit: 4c9b3b0 2026-04-11 00:17:06.078857 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-11 00:17:06.078868 | orchestrator | OS/Arch: linux/amd64 2026-04-11 00:17:06.078879 | orchestrator | Experimental: false 2026-04-11 00:17:06.078891 | orchestrator | containerd: 2026-04-11 00:17:06.078912 | orchestrator | Version: v2.2.2 2026-04-11 00:17:06.078924 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-11 00:17:06.078936 | orchestrator | runc: 2026-04-11 00:17:06.078947 | orchestrator | Version: 1.3.4 2026-04-11 00:17:06.078959 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-11 00:17:06.078970 | orchestrator | docker-init: 2026-04-11 00:17:06.078980 | orchestrator | Version: 0.19.0 2026-04-11 00:17:06.078992 | orchestrator | GitCommit: de40ad0 2026-04-11 00:17:06.081026 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-11 00:17:06.088662 | orchestrator | + set -e 2026-04-11 00:17:06.088723 | orchestrator | + source /opt/manager-vars.sh 2026-04-11 00:17:06.088736 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-11 00:17:06.088749 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-11 00:17:06.088761 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-11 00:17:06.088772 | orchestrator | ++ CEPH_VERSION=reef 2026-04-11 00:17:06.088784 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-11 00:17:06.088796 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-11 00:17:06.088808 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-11 00:17:06.088820 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-11 00:17:06.088831 | 
orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-11 00:17:06.088842 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-11 00:17:06.088853 | orchestrator | ++ export ARA=false 2026-04-11 00:17:06.088865 | orchestrator | ++ ARA=false 2026-04-11 00:17:06.088877 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-11 00:17:06.088888 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-11 00:17:06.088900 | orchestrator | ++ export TEMPEST=true 2026-04-11 00:17:06.088911 | orchestrator | ++ TEMPEST=true 2026-04-11 00:17:06.088922 | orchestrator | ++ export IS_ZUUL=true 2026-04-11 00:17:06.088933 | orchestrator | ++ IS_ZUUL=true 2026-04-11 00:17:06.088945 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.75 2026-04-11 00:17:06.088957 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.75 2026-04-11 00:17:06.088968 | orchestrator | ++ export EXTERNAL_API=false 2026-04-11 00:17:06.088979 | orchestrator | ++ EXTERNAL_API=false 2026-04-11 00:17:06.088991 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-11 00:17:06.089002 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-11 00:17:06.089013 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-11 00:17:06.089025 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-11 00:17:06.089036 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-11 00:17:06.089047 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-11 00:17:06.089059 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-11 00:17:06.089070 | orchestrator | ++ export INTERACTIVE=false 2026-04-11 00:17:06.089081 | orchestrator | ++ INTERACTIVE=false 2026-04-11 00:17:06.089092 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-11 00:17:06.089108 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-11 00:17:06.089129 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]] 2026-04-11 00:17:06.089141 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0 2026-04-11 00:17:06.095505 | orchestrator | + set -e 
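The shell trace above shows the deploy scripts repeatedly `source /opt/manager-vars.sh` to pull in deployment parameters such as NUMBER_OF_NODES and MANAGER_VERSION. A minimal sketch of that pattern, with a throwaway file in place of the real path and an added fail-early check (the validation loop is our addition, not part of the testbed scripts):

```shell
#!/usr/bin/env bash
# Sketch: source an environment file like /opt/manager-vars.sh and fail
# early if a required variable is missing. File path and variable subset
# are illustrative, taken from the values visible in the trace above.
set -e
vars_file=$(mktemp)
cat > "$vars_file" <<'EOF'
export NUMBER_OF_NODES=6
export MANAGER_VERSION=10.0.0
export OPENSTACK_VERSION=2024.2
EOF
# shellcheck disable=SC1090
. "$vars_file"
for v in NUMBER_OF_NODES MANAGER_VERSION OPENSTACK_VERSION; do
    # ${!v} is bash indirect expansion: the value of the variable named by $v.
    [ -n "${!v:-}" ] || { echo "missing required variable: $v" >&2; exit 1; }
done
echo "manager version: $MANAGER_VERSION"   # prints "manager version: 10.0.0"
```

Because the file only contains `export` statements, sourcing it twice (as the trace does, once in the wrapper and once in 000-manager.sh) is idempotent.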
2026-04-11 00:17:06.095967 | orchestrator | + VERSION=10.0.0 2026-04-11 00:17:06.095989 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml 2026-04-11 00:17:06.103012 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]] 2026-04-11 00:17:06.103114 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-11 00:17:06.107694 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-11 00:17:06.110972 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-04-11 00:17:06.117417 | orchestrator | /opt/configuration ~ 2026-04-11 00:17:06.117490 | orchestrator | + set -e 2026-04-11 00:17:06.117503 | orchestrator | + pushd /opt/configuration 2026-04-11 00:17:06.117514 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-11 00:17:06.120332 | orchestrator | + source /opt/venv/bin/activate 2026-04-11 00:17:06.121035 | orchestrator | ++ deactivate nondestructive 2026-04-11 00:17:06.121114 | orchestrator | ++ '[' -n '' ']' 2026-04-11 00:17:06.121132 | orchestrator | ++ '[' -n '' ']' 2026-04-11 00:17:06.121176 | orchestrator | ++ hash -r 2026-04-11 00:17:06.121193 | orchestrator | ++ '[' -n '' ']' 2026-04-11 00:17:06.121203 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-11 00:17:06.121213 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-11 00:17:06.121223 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-04-11 00:17:06.121244 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-11 00:17:06.121254 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-11 00:17:06.121264 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-11 00:17:06.121274 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-11 00:17:06.121285 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-11 00:17:06.121295 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-11 00:17:06.121306 | orchestrator | ++ export PATH 2026-04-11 00:17:06.121316 | orchestrator | ++ '[' -n '' ']' 2026-04-11 00:17:06.121326 | orchestrator | ++ '[' -z '' ']' 2026-04-11 00:17:06.121335 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-11 00:17:06.121345 | orchestrator | ++ PS1='(venv) ' 2026-04-11 00:17:06.121375 | orchestrator | ++ export PS1 2026-04-11 00:17:06.121385 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-11 00:17:06.121395 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-11 00:17:06.121405 | orchestrator | ++ hash -r 2026-04-11 00:17:06.121415 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-04-11 00:17:07.135502 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-04-11 00:17:07.136116 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1) 2026-04-11 00:17:07.137448 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-04-11 00:17:07.138862 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-04-11 00:17:07.139751 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-04-11 00:17:07.149641 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2) 2026-04-11 00:17:07.150998 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-04-11 00:17:07.151859 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-04-11 00:17:07.153221 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-04-11 00:17:07.181353 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7) 2026-04-11 00:17:07.182520 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-04-11 00:17:07.184240 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-04-11 00:17:07.185626 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-04-11 00:17:07.189689 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-04-11 00:17:07.387380 | orchestrator | ++ which gilt 2026-04-11 00:17:07.391583 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-04-11 00:17:07.391669 | orchestrator | + /opt/venv/bin/gilt overlay 2026-04-11 00:17:07.632280 | orchestrator | osism.cfg-generics: 2026-04-11 00:17:07.786856 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-04-11 00:17:07.786967 | orchestrator | - copied 
(v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-04-11 00:17:07.787207 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-04-11 00:17:07.787466 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-04-11 00:17:08.491644 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-04-11 00:17:08.501836 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-04-11 00:17:08.842671 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-04-11 00:17:08.889751 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-11 00:17:08.889826 | orchestrator | + deactivate 2026-04-11 00:17:08.889837 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-11 00:17:08.889846 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-11 00:17:08.889853 | orchestrator | + export PATH 2026-04-11 00:17:08.889861 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-11 00:17:08.889869 | orchestrator | + '[' -n '' ']' 2026-04-11 00:17:08.889877 | orchestrator | + hash -r 2026-04-11 00:17:08.889884 | orchestrator | + '[' -n '' ']' 2026-04-11 00:17:08.889891 | orchestrator | + unset VIRTUAL_ENV 2026-04-11 00:17:08.889898 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-11 00:17:08.889913 | orchestrator | ~ 2026-04-11 00:17:08.889921 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-11 00:17:08.889928 | orchestrator | + unset -f deactivate 2026-04-11 00:17:08.889935 | orchestrator | + popd 2026-04-11 00:17:08.892175 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]] 2026-04-11 00:17:08.892234 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-11 00:17:08.892919 | orchestrator | ++ semver 10.0.0 7.0.0 2026-04-11 00:17:08.954961 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-11 00:17:08.955028 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-11 00:17:08.955893 | orchestrator | ++ semver 10.0.0 10.0.0-0 2026-04-11 00:17:09.038792 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-11 00:17:09.038879 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-04-11 00:17:09.044688 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-04-11 00:17:09.049547 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-11 00:17:09.145440 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-11 00:17:09.145623 | orchestrator | + source /opt/venv/bin/activate 2026-04-11 00:17:09.145649 | orchestrator | ++ deactivate nondestructive 2026-04-11 00:17:09.145662 | orchestrator | ++ '[' -n '' ']' 2026-04-11 00:17:09.145673 | orchestrator | ++ '[' -n '' ']' 2026-04-11 00:17:09.145684 | orchestrator | ++ hash -r 2026-04-11 00:17:09.145695 | orchestrator | ++ '[' -n '' ']' 2026-04-11 00:17:09.145720 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-11 00:17:09.145731 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-11 00:17:09.145742 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-04-11 00:17:09.145753 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-11 00:17:09.145765 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-11 00:17:09.145776 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-11 00:17:09.145807 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-11 00:17:09.145820 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-11 00:17:09.145832 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-11 00:17:09.146180 | orchestrator | ++ export PATH 2026-04-11 00:17:09.146199 | orchestrator | ++ '[' -n '' ']' 2026-04-11 00:17:09.146210 | orchestrator | ++ '[' -z '' ']' 2026-04-11 00:17:09.146222 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-11 00:17:09.146232 | orchestrator | ++ PS1='(venv) ' 2026-04-11 00:17:09.146243 | orchestrator | ++ export PS1 2026-04-11 00:17:09.146254 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-11 00:17:09.146265 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-11 00:17:09.146276 | orchestrator | ++ hash -r 2026-04-11 00:17:09.146288 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-04-11 00:17:10.266261 | orchestrator | 2026-04-11 00:17:10.266451 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-04-11 00:17:10.266472 | orchestrator | 2026-04-11 00:17:10.266484 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-11 00:17:10.820315 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:10.820457 | orchestrator | 2026-04-11 00:17:10.820486 | orchestrator | TASK [Copy fact files] ********************************************************* 
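The `semver 10.0.0 7.0.0` check traced above gates configuration such as `enable_osism_kubernetes: true` on the manager version. The `semver` helper itself is not shown in this log; a common shell approximation (an assumption here — it uses `sort -V` and, unlike a full semver comparison, does not order pre-release suffixes like `10.0.0-0` correctly) looks like this:

```shell
#!/bin/sh
# Approximate "version greater-or-equal" test via GNU sort -V. This is an
# illustrative stand-in, not the semver helper the testbed actually ships.
version_ge() {  # succeeds when $1 >= $2
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# Gate a config flag on the comparison, as the traced script does.
if version_ge 10.0.0 7.0.0; then
    echo 'enable_osism_kubernetes: true'
fi
```

`sort -V` orders `10.0.0` after `7.0.0` (numeric field comparison rather than lexicographic), which is the property plain string comparison gets wrong.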
2026-04-11 00:17:11.752329 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:11.752533 | orchestrator | 2026-04-11 00:17:11.752563 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-04-11 00:17:11.752586 | orchestrator | 2026-04-11 00:17:11.752606 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-11 00:17:13.972214 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:13.972301 | orchestrator | 2026-04-11 00:17:13.972320 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-04-11 00:17:14.026509 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:14.026616 | orchestrator | 2026-04-11 00:17:14.026639 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-04-11 00:17:14.479098 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:14.479196 | orchestrator | 2026-04-11 00:17:14.479212 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-04-11 00:17:14.521410 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:17:14.521509 | orchestrator | 2026-04-11 00:17:14.521524 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-11 00:17:14.851544 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:14.851640 | orchestrator | 2026-04-11 00:17:14.851656 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-11 00:17:15.188536 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:15.188629 | orchestrator | 2026-04-11 00:17:15.188646 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-11 00:17:15.301197 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:17:15.301285 | orchestrator | 2026-04-11 00:17:15.301301 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-04-11 00:17:15.301313 | orchestrator | 2026-04-11 00:17:15.301325 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-11 00:17:17.064274 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:17.064426 | orchestrator | 2026-04-11 00:17:17.064451 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-11 00:17:17.162943 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-11 00:17:17.163031 | orchestrator | 2026-04-11 00:17:17.163047 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-11 00:17:17.211885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-11 00:17:17.211971 | orchestrator | 2026-04-11 00:17:17.211986 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-11 00:17:18.276542 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-04-11 00:17:18.276639 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-11 00:17:18.276655 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-04-11 00:17:18.276667 | orchestrator | 2026-04-11 00:17:18.276679 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-11 00:17:20.040892 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-11 00:17:20.040966 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-11 00:17:20.040979 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-11 00:17:20.040991 | orchestrator | 2026-04-11 00:17:20.041003 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-04-11 00:17:20.671975 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-11 00:17:20.672078 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:20.672096 | orchestrator | 2026-04-11 00:17:20.672110 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-11 00:17:21.300104 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-11 00:17:21.300231 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:21.300260 | orchestrator | 2026-04-11 00:17:21.300282 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-11 00:17:21.362113 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:17:21.362213 | orchestrator | 2026-04-11 00:17:21.362232 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-11 00:17:21.692581 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:21.692678 | orchestrator | 2026-04-11 00:17:21.692695 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-11 00:17:21.754409 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-11 00:17:21.754547 | orchestrator | 2026-04-11 00:17:21.754574 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-04-11 00:17:22.819704 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:22.819787 | orchestrator | 2026-04-11 00:17:22.819802 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-11 00:17:23.613135 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:23.613223 | orchestrator | 2026-04-11 00:17:23.613257 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-11 00:17:33.942715 | 
orchestrator | changed: [testbed-manager] 2026-04-11 00:17:33.942906 | orchestrator | 2026-04-11 00:17:33.942924 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-11 00:17:33.984969 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:17:33.985058 | orchestrator | 2026-04-11 00:17:33.985073 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-11 00:17:33.985086 | orchestrator | 2026-04-11 00:17:33.985098 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-11 00:17:35.880125 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:35.880244 | orchestrator | 2026-04-11 00:17:35.880261 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-11 00:17:35.995223 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-11 00:17:35.995318 | orchestrator | 2026-04-11 00:17:35.995333 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-11 00:17:36.059270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-11 00:17:36.059411 | orchestrator | 2026-04-11 00:17:36.059438 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-04-11 00:17:38.681632 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:38.681734 | orchestrator | 2026-04-11 00:17:38.681750 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-11 00:17:38.718994 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:38.719087 | orchestrator | 2026-04-11 00:17:38.719102 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-11 00:17:38.840023 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-11 00:17:38.840117 | orchestrator | 2026-04-11 00:17:38.840132 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-11 00:17:41.916809 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-11 00:17:41.916983 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-11 00:17:41.917003 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-11 00:17:41.917016 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-11 00:17:41.917027 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-11 00:17:41.917038 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-11 00:17:41.917052 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-11 00:17:41.917063 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-11 00:17:41.917074 | orchestrator | 2026-04-11 00:17:41.917087 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-11 00:17:42.576501 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:42.576613 | orchestrator | 2026-04-11 00:17:42.576630 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-04-11 00:17:43.220580 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:43.220662 | orchestrator | 2026-04-11 00:17:43.220675 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-11 00:17:43.291600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-11 00:17:43.292584 | orchestrator | 2026-04-11 00:17:43.292619 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-04-11 00:17:44.623637 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-11 00:17:44.623739 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-11 00:17:44.623754 | orchestrator | 2026-04-11 00:17:44.623767 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-11 00:17:45.257143 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:45.257243 | orchestrator | 2026-04-11 00:17:45.257260 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-11 00:17:45.318270 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:17:45.318430 | orchestrator | 2026-04-11 00:17:45.318462 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-11 00:17:45.419505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-11 00:17:45.419596 | orchestrator | 2026-04-11 00:17:45.419610 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-11 00:17:46.107190 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:46.107287 | orchestrator | 2026-04-11 00:17:46.107303 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-04-11 00:17:46.169970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-11 00:17:46.170159 | orchestrator | 2026-04-11 00:17:46.170179 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-11 00:17:47.609272 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-11 00:17:47.609374 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-04-11 00:17:47.609459 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:47.609474 | orchestrator | 2026-04-11 00:17:47.609487 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-11 00:17:48.266336 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:48.266490 | orchestrator | 2026-04-11 00:17:48.266512 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-11 00:17:48.327569 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:17:48.328484 | orchestrator | 2026-04-11 00:17:48.328531 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-11 00:17:48.420582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-11 00:17:48.420681 | orchestrator | 2026-04-11 00:17:48.420698 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-11 00:17:49.968715 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:49.968821 | orchestrator | 2026-04-11 00:17:49.968839 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-11 00:17:50.364367 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:50.364518 | orchestrator | 2026-04-11 00:17:50.364535 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-11 00:17:51.652683 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-11 00:17:51.652791 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-11 00:17:51.652807 | orchestrator | 2026-04-11 00:17:51.652822 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-11 00:17:52.361305 | orchestrator | changed: [testbed-manager] 2026-04-11 
00:17:52.361470 | orchestrator | 2026-04-11 00:17:52.361489 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-11 00:17:52.761699 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:52.761811 | orchestrator | 2026-04-11 00:17:52.761828 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-11 00:17:53.136602 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:53.136693 | orchestrator | 2026-04-11 00:17:53.136706 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-11 00:17:53.187666 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:17:53.187762 | orchestrator | 2026-04-11 00:17:53.187807 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-11 00:17:53.259334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-11 00:17:53.259498 | orchestrator | 2026-04-11 00:17:53.259516 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-04-11 00:17:53.292233 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:53.292323 | orchestrator | 2026-04-11 00:17:53.292337 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-11 00:17:55.362162 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-11 00:17:55.362294 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-11 00:17:55.362322 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-11 00:17:55.362343 | orchestrator | 2026-04-11 00:17:55.362357 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-11 00:17:56.087870 | orchestrator | changed: [testbed-manager] 2026-04-11 
00:17:56.087951 | orchestrator | 2026-04-11 00:17:56.087962 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-11 00:17:56.790893 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:56.790996 | orchestrator | 2026-04-11 00:17:56.791012 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-11 00:17:57.525289 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:57.525428 | orchestrator | 2026-04-11 00:17:57.525446 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-11 00:17:57.610376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-11 00:17:57.610554 | orchestrator | 2026-04-11 00:17:57.610571 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-11 00:17:57.658734 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:57.658848 | orchestrator | 2026-04-11 00:17:57.658871 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-11 00:17:58.342494 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-11 00:17:58.342597 | orchestrator | 2026-04-11 00:17:58.342614 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-04-11 00:17:58.417648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-11 00:17:58.417741 | orchestrator | 2026-04-11 00:17:58.417757 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-11 00:17:59.117440 | orchestrator | changed: [testbed-manager] 2026-04-11 00:17:59.117529 | orchestrator | 2026-04-11 00:17:59.117540 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-04-11 00:17:59.741124 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:59.741206 | orchestrator | 2026-04-11 00:17:59.741217 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-11 00:17:59.777890 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:17:59.777974 | orchestrator | 2026-04-11 00:17:59.777985 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-11 00:17:59.838337 | orchestrator | ok: [testbed-manager] 2026-04-11 00:17:59.838501 | orchestrator | 2026-04-11 00:17:59.838521 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-11 00:18:00.685887 | orchestrator | changed: [testbed-manager] 2026-04-11 00:18:00.685997 | orchestrator | 2026-04-11 00:18:00.686072 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-11 00:19:10.736625 | orchestrator | changed: [testbed-manager] 2026-04-11 00:19:10.736738 | orchestrator | 2026-04-11 00:19:10.736755 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-11 00:19:11.625784 | orchestrator | ok: [testbed-manager] 2026-04-11 00:19:11.625911 | orchestrator | 2026-04-11 00:19:11.625940 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-04-11 00:19:11.679254 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:19:11.679340 | orchestrator | 2026-04-11 00:19:11.679354 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-11 00:19:14.349551 | orchestrator | changed: [testbed-manager] 2026-04-11 00:19:14.349658 | orchestrator | 2026-04-11 00:19:14.349674 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-04-11 00:19:14.436611 | orchestrator | ok: [testbed-manager] 2026-04-11 00:19:14.436716 | orchestrator | 2026-04-11 00:19:14.436735 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-11 00:19:14.436748 | orchestrator | 2026-04-11 00:19:14.436760 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-11 00:19:14.484725 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:19:14.484821 | orchestrator | 2026-04-11 00:19:14.484836 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-11 00:20:14.528734 | orchestrator | Pausing for 60 seconds 2026-04-11 00:20:14.528850 | orchestrator | changed: [testbed-manager] 2026-04-11 00:20:14.528867 | orchestrator | 2026-04-11 00:20:14.528881 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-11 00:20:17.552872 | orchestrator | changed: [testbed-manager] 2026-04-11 00:20:17.552972 | orchestrator | 2026-04-11 00:20:17.553012 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-11 00:20:58.954670 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-11 00:20:58.954779 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
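The `Wait for an healthy manager service` handler above polls with a retry budget (50 retries in this run, with the service turning healthy two polls in). The pattern reduces to a plain retry loop; in the sketch below, `check_healthy` is a stub that reports healthy on the third poll, and the retry count and interval are shortened for illustration:

```shell
#!/bin/sh
# Retry-until-healthy loop. check_healthy is a stand-in for a real probe
# (e.g. inspecting a container's reported health status).
set -e
attempts=0
check_healthy() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]   # pretend the service turns healthy on poll 3
}

retries=5
until check_healthy; do
    retries=$((retries - 1))
    if [ "$retries" -le 0 ]; then
        echo "service never became healthy" >&2
        exit 1
    fi
    sleep 1   # poll interval, shortened for the sketch
done
echo "healthy after $attempts checks"
```

Budgeting retries and failing loudly when they run out is what turns a flaky "eventually up" service into a deterministic pass/fail signal for the CI job.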
2026-04-11 00:20:58.954794 | orchestrator | changed: [testbed-manager] 2026-04-11 00:20:58.954807 | orchestrator | 2026-04-11 00:20:58.954820 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-11 00:21:04.142739 | orchestrator | changed: [testbed-manager] 2026-04-11 00:21:04.142841 | orchestrator | 2026-04-11 00:21:04.142852 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-11 00:21:04.217341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-11 00:21:04.217431 | orchestrator | 2026-04-11 00:21:04.217444 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-11 00:21:04.217454 | orchestrator | 2026-04-11 00:21:04.217463 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-11 00:21:04.264882 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:21:04.264983 | orchestrator | 2026-04-11 00:21:04.265001 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-11 00:21:04.326565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-11 00:21:04.326700 | orchestrator | 2026-04-11 00:21:04.326729 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-11 00:21:05.025968 | orchestrator | changed: [testbed-manager] 2026-04-11 00:21:05.026126 | orchestrator | 2026-04-11 00:21:05.026146 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-11 00:21:07.880709 | orchestrator | ok: [testbed-manager] 2026-04-11 00:21:07.880828 | orchestrator | 2026-04-11 00:21:07.880849 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-04-11 00:21:07.949382 | orchestrator | ok: [testbed-manager] => { 2026-04-11 00:21:07.949536 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-11 00:21:07.949554 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-11 00:21:07.949566 | orchestrator | "Checking running containers against expected versions...", 2026-04-11 00:21:07.949580 | orchestrator | "", 2026-04-11 00:21:07.949592 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-11 00:21:07.949603 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-11 00:21:07.949616 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.949628 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-11 00:21:07.949639 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.949650 | orchestrator | "", 2026-04-11 00:21:07.949662 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-11 00:21:07.949698 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-11 00:21:07.949709 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.949721 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-11 00:21:07.949732 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.949743 | orchestrator | "", 2026-04-11 00:21:07.949754 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-11 00:21:07.949765 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-11 00:21:07.949776 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.949787 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-11 00:21:07.949798 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.949809 | orchestrator | 
"", 2026-04-11 00:21:07.949820 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-11 00:21:07.949834 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-11 00:21:07.949845 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.949856 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-11 00:21:07.949867 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.949878 | orchestrator | "", 2026-04-11 00:21:07.949889 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-11 00:21:07.949900 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-11 00:21:07.949911 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.949922 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-11 00:21:07.949933 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.949945 | orchestrator | "", 2026-04-11 00:21:07.949956 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-11 00:21:07.949967 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-11 00:21:07.949978 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.949989 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-11 00:21:07.950000 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.950011 | orchestrator | "", 2026-04-11 00:21:07.950076 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-11 00:21:07.950088 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-11 00:21:07.950098 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.950109 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-11 00:21:07.950121 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.950132 | orchestrator | "", 2026-04-11 00:21:07.950142 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-04-11 00:21:07.950153 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-11 00:21:07.950164 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.950175 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-11 00:21:07.950185 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.950196 | orchestrator | "", 2026-04-11 00:21:07.950207 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-11 00:21:07.950217 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-11 00:21:07.950228 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.950239 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-11 00:21:07.950250 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.950260 | orchestrator | "", 2026-04-11 00:21:07.950271 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-11 00:21:07.950282 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-11 00:21:07.950293 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.950303 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-11 00:21:07.950314 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.950325 | orchestrator | "", 2026-04-11 00:21:07.950336 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-11 00:21:07.950355 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-11 00:21:07.950366 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.950376 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-11 00:21:07.950387 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.950398 | orchestrator | "", 2026-04-11 00:21:07.950408 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-11 00:21:07.950419 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-11 00:21:07.950430 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.950440 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-11 00:21:07.950451 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.950462 | orchestrator | "", 2026-04-11 00:21:07.950473 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-11 00:21:07.950504 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-11 00:21:07.950515 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.950526 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-11 00:21:07.950537 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.950548 | orchestrator | "", 2026-04-11 00:21:07.950558 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-11 00:21:07.950569 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-11 00:21:07.950580 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.950599 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-11 00:21:07.950640 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.950653 | orchestrator | "", 2026-04-11 00:21:07.950664 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-11 00:21:07.950675 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-11 00:21:07.950686 | orchestrator | " Enabled: true", 2026-04-11 00:21:07.950696 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-11 00:21:07.950707 | orchestrator | " Status: ✅ MATCH", 2026-04-11 00:21:07.950718 | orchestrator | "", 2026-04-11 00:21:07.950729 | orchestrator | "=== Summary ===", 2026-04-11 00:21:07.950739 | orchestrator | "Errors (version mismatches): 0", 2026-04-11 00:21:07.950750 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-04-11 00:21:07.950761 | orchestrator | "", 2026-04-11 00:21:07.950772 | orchestrator | "✅ All running containers match expected versions!" 2026-04-11 00:21:07.950783 | orchestrator | ] 2026-04-11 00:21:07.950794 | orchestrator | } 2026-04-11 00:21:07.950805 | orchestrator | 2026-04-11 00:21:07.950816 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-11 00:21:08.000441 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:21:08.000556 | orchestrator | 2026-04-11 00:21:08.000568 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:21:08.000579 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-11 00:21:08.000587 | orchestrator | 2026-04-11 00:21:08.081990 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-11 00:21:08.082139 | orchestrator | + deactivate 2026-04-11 00:21:08.082155 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-11 00:21:08.082168 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-11 00:21:08.082180 | orchestrator | + export PATH 2026-04-11 00:21:08.082191 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-11 00:21:08.082204 | orchestrator | + '[' -n '' ']' 2026-04-11 00:21:08.082215 | orchestrator | + hash -r 2026-04-11 00:21:08.082226 | orchestrator | + '[' -n '' ']' 2026-04-11 00:21:08.082237 | orchestrator | + unset VIRTUAL_ENV 2026-04-11 00:21:08.082247 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-11 00:21:08.082259 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-11 00:21:08.082270 | orchestrator | + unset -f deactivate 2026-04-11 00:21:08.082282 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-11 00:21:08.088930 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-11 00:21:08.089016 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-11 00:21:08.089029 | orchestrator | + local max_attempts=60 2026-04-11 00:21:08.089041 | orchestrator | + local name=ceph-ansible 2026-04-11 00:21:08.089053 | orchestrator | + local attempt_num=1 2026-04-11 00:21:08.090153 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 00:21:08.120724 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-11 00:21:08.120791 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-11 00:21:08.120806 | orchestrator | + local max_attempts=60 2026-04-11 00:21:08.120817 | orchestrator | + local name=kolla-ansible 2026-04-11 00:21:08.120829 | orchestrator | + local attempt_num=1 2026-04-11 00:21:08.121808 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-11 00:21:08.158184 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-11 00:21:08.158322 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-11 00:21:08.158348 | orchestrator | + local max_attempts=60 2026-04-11 00:21:08.158362 | orchestrator | + local name=osism-ansible 2026-04-11 00:21:08.158374 | orchestrator | + local attempt_num=1 2026-04-11 00:21:08.158793 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-11 00:21:08.196026 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-11 00:21:08.196112 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-11 00:21:08.196128 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-11 00:21:08.901851 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-11 00:21:09.093939 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-11 00:21:09.094008 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-04-11 00:21:09.094044 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-04-11 00:21:09.094051 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-04-11 00:21:09.094073 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-04-11 00:21:09.094079 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-04-11 00:21:09.094084 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-04-11 00:21:09.094089 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2026-04-11 00:21:09.094094 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-04-11 00:21:09.094099 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-04-11 00:21:09.094104 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 
"/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2026-04-11 00:21:09.094108 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-04-11 00:21:09.094128 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-04-11 00:21:09.094133 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-04-11 00:21:09.094138 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-04-11 00:21:09.094143 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-04-11 00:21:09.101448 | orchestrator | ++ semver 10.0.0 7.0.0 2026-04-11 00:21:09.153301 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-11 00:21:09.153382 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-11 00:21:09.157096 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-11 00:21:21.586928 | orchestrator | 2026-04-11 00:21:21 | INFO  | Prepare task for execution of resolvconf. 2026-04-11 00:21:21.770822 | orchestrator | 2026-04-11 00:21:21 | INFO  | Task c4a7f2b8-2ac0-4b34-8d02-b37210df0814 (resolvconf) was prepared for execution. 2026-04-11 00:21:21.770942 | orchestrator | 2026-04-11 00:21:21 | INFO  | It takes a moment until task c4a7f2b8-2ac0-4b34-8d02-b37210df0814 (resolvconf) has been started and output is visible here. 
2026-04-11 00:21:34.356486 | orchestrator | 2026-04-11 00:21:34.356593 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-11 00:21:34.356600 | orchestrator | 2026-04-11 00:21:34.356605 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-11 00:21:34.356609 | orchestrator | Saturday 11 April 2026 00:21:24 +0000 (0:00:00.131) 0:00:00.132 ******** 2026-04-11 00:21:34.356613 | orchestrator | ok: [testbed-manager] 2026-04-11 00:21:34.356618 | orchestrator | 2026-04-11 00:21:34.356623 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-11 00:21:34.356627 | orchestrator | Saturday 11 April 2026 00:21:27 +0000 (0:00:03.440) 0:00:03.572 ******** 2026-04-11 00:21:34.356632 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:21:34.356636 | orchestrator | 2026-04-11 00:21:34.356640 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-11 00:21:34.356644 | orchestrator | Saturday 11 April 2026 00:21:28 +0000 (0:00:00.061) 0:00:03.633 ******** 2026-04-11 00:21:34.356660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-11 00:21:34.356666 | orchestrator | 2026-04-11 00:21:34.356670 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-11 00:21:34.356674 | orchestrator | Saturday 11 April 2026 00:21:28 +0000 (0:00:00.081) 0:00:03.714 ******** 2026-04-11 00:21:34.356678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-11 00:21:34.356681 | orchestrator | 2026-04-11 00:21:34.356685 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-11 00:21:34.356689 | orchestrator | Saturday 11 April 2026 00:21:28 +0000 (0:00:00.085) 0:00:03.800 ******** 2026-04-11 00:21:34.356693 | orchestrator | ok: [testbed-manager] 2026-04-11 00:21:34.356697 | orchestrator | 2026-04-11 00:21:34.356700 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-11 00:21:34.356704 | orchestrator | Saturday 11 April 2026 00:21:29 +0000 (0:00:01.133) 0:00:04.934 ******** 2026-04-11 00:21:34.356708 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:21:34.356730 | orchestrator | 2026-04-11 00:21:34.356737 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-11 00:21:34.356744 | orchestrator | Saturday 11 April 2026 00:21:29 +0000 (0:00:00.062) 0:00:04.996 ******** 2026-04-11 00:21:34.356750 | orchestrator | ok: [testbed-manager] 2026-04-11 00:21:34.356756 | orchestrator | 2026-04-11 00:21:34.356762 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-11 00:21:34.356769 | orchestrator | Saturday 11 April 2026 00:21:29 +0000 (0:00:00.561) 0:00:05.558 ******** 2026-04-11 00:21:34.356775 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:21:34.356781 | orchestrator | 2026-04-11 00:21:34.356788 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-11 00:21:34.356795 | orchestrator | Saturday 11 April 2026 00:21:30 +0000 (0:00:00.082) 0:00:05.641 ******** 2026-04-11 00:21:34.356801 | orchestrator | changed: [testbed-manager] 2026-04-11 00:21:34.356808 | orchestrator | 2026-04-11 00:21:34.356813 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-11 00:21:34.356821 | orchestrator | Saturday 11 April 2026 00:21:30 +0000 (0:00:00.634) 0:00:06.275 ******** 2026-04-11 00:21:34.356824 | orchestrator | changed: 
[testbed-manager] 2026-04-11 00:21:34.356828 | orchestrator | 2026-04-11 00:21:34.356832 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-11 00:21:34.356836 | orchestrator | Saturday 11 April 2026 00:21:31 +0000 (0:00:01.135) 0:00:07.411 ******** 2026-04-11 00:21:34.356840 | orchestrator | ok: [testbed-manager] 2026-04-11 00:21:34.356844 | orchestrator | 2026-04-11 00:21:34.356851 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-11 00:21:34.356857 | orchestrator | Saturday 11 April 2026 00:21:32 +0000 (0:00:01.043) 0:00:08.454 ******** 2026-04-11 00:21:34.356864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-11 00:21:34.356870 | orchestrator | 2026-04-11 00:21:34.356876 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-11 00:21:34.356883 | orchestrator | Saturday 11 April 2026 00:21:32 +0000 (0:00:00.096) 0:00:08.551 ******** 2026-04-11 00:21:34.356889 | orchestrator | changed: [testbed-manager] 2026-04-11 00:21:34.356895 | orchestrator | 2026-04-11 00:21:34.356901 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:21:34.356908 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-11 00:21:34.356915 | orchestrator | 2026-04-11 00:21:34.356919 | orchestrator | 2026-04-11 00:21:34.356923 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:21:34.356927 | orchestrator | Saturday 11 April 2026 00:21:34 +0000 (0:00:01.200) 0:00:09.752 ******** 2026-04-11 00:21:34.356931 | orchestrator | =============================================================================== 2026-04-11 00:21:34.356935 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.44s 2026-04-11 00:21:34.356940 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.20s 2026-04-11 00:21:34.356946 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.14s 2026-04-11 00:21:34.356953 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s 2026-04-11 00:21:34.356959 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.04s 2026-04-11 00:21:34.356965 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.63s 2026-04-11 00:21:34.356986 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.56s 2026-04-11 00:21:34.356993 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s 2026-04-11 00:21:34.356999 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-04-11 00:21:34.357012 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-04-11 00:21:34.357016 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-04-11 00:21:34.357020 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-04-11 00:21:34.357024 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-04-11 00:21:34.591039 | orchestrator | + osism apply sshconfig 2026-04-11 00:21:45.967240 | orchestrator | 2026-04-11 00:21:45 | INFO  | Prepare task for execution of sshconfig. 2026-04-11 00:21:46.039901 | orchestrator | 2026-04-11 00:21:46 | INFO  | Task f3ea97a1-5a98-4ff9-95a1-d4e3f526b64f (sshconfig) was prepared for execution. 
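The two `changed` tasks in the resolvconf run above replace `/etc/resolv.conf` with a symlink to the systemd-resolved stub file and restart the service. A sketch of the symlink step only; the optional root prefix is an assumption added for testing outside `/`, and the real role follows up by restarting `systemd-resolved`:

```shell
# Recreate /etc/resolv.conf as a link to the systemd-resolved stub, as in
# the "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf"
# task. "$1" is an optional root prefix (test hook, not part of the role).
link_stub_resolv() {
    local root="${1:-}"
    mkdir -p "$root/etc"
    ln -sfn /run/systemd/resolve/stub-resolv.conf "$root/etc/resolv.conf"
}
```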
2026-04-11 00:21:46.039994 | orchestrator | 2026-04-11 00:21:46 | INFO  | It takes a moment until task f3ea97a1-5a98-4ff9-95a1-d4e3f526b64f (sshconfig) has been started and output is visible here. 2026-04-11 00:21:56.531227 | orchestrator | 2026-04-11 00:21:56.531332 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-11 00:21:56.531349 | orchestrator | 2026-04-11 00:21:56.531361 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-11 00:21:56.531373 | orchestrator | Saturday 11 April 2026 00:21:49 +0000 (0:00:00.186) 0:00:00.186 ******** 2026-04-11 00:21:56.531384 | orchestrator | ok: [testbed-manager] 2026-04-11 00:21:56.531396 | orchestrator | 2026-04-11 00:21:56.531407 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-11 00:21:56.531418 | orchestrator | Saturday 11 April 2026 00:21:50 +0000 (0:00:00.932) 0:00:01.119 ******** 2026-04-11 00:21:56.531429 | orchestrator | changed: [testbed-manager] 2026-04-11 00:21:56.531440 | orchestrator | 2026-04-11 00:21:56.531451 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-11 00:21:56.531462 | orchestrator | Saturday 11 April 2026 00:21:50 +0000 (0:00:00.480) 0:00:01.600 ******** 2026-04-11 00:21:56.531473 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-11 00:21:56.531484 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-11 00:21:56.531495 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-11 00:21:56.531506 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-11 00:21:56.531516 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-11 00:21:56.531527 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-11 00:21:56.531607 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-11 00:21:56.531620 | orchestrator | 2026-04-11 00:21:56.531631 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-11 00:21:56.531642 | orchestrator | Saturday 11 April 2026 00:21:55 +0000 (0:00:05.152) 0:00:06.752 ******** 2026-04-11 00:21:56.531653 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:21:56.531663 | orchestrator | 2026-04-11 00:21:56.531674 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-11 00:21:56.531685 | orchestrator | Saturday 11 April 2026 00:21:55 +0000 (0:00:00.086) 0:00:06.839 ******** 2026-04-11 00:21:56.531696 | orchestrator | changed: [testbed-manager] 2026-04-11 00:21:56.531707 | orchestrator | 2026-04-11 00:21:56.531718 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:21:56.531729 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 00:21:56.531741 | orchestrator | 2026-04-11 00:21:56.531752 | orchestrator | 2026-04-11 00:21:56.531763 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:21:56.531774 | orchestrator | Saturday 11 April 2026 00:21:56 +0000 (0:00:00.478) 0:00:07.318 ******** 2026-04-11 00:21:56.531785 | orchestrator | =============================================================================== 2026-04-11 00:21:56.531795 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.15s 2026-04-11 00:21:56.531835 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.93s 2026-04-11 00:21:56.531847 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.48s 2026-04-11 00:21:56.531858 | orchestrator | osism.commons.sshconfig : Assemble ssh config 
--------------------------- 0.48s 2026-04-11 00:21:56.531869 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-04-11 00:21:56.647260 | orchestrator | + osism apply known-hosts 2026-04-11 00:22:07.795863 | orchestrator | 2026-04-11 00:22:07 | INFO  | Prepare task for execution of known-hosts. 2026-04-11 00:22:07.861108 | orchestrator | 2026-04-11 00:22:07 | INFO  | Task 70f19b95-e362-4568-b8ee-0c5843d6abcf (known-hosts) was prepared for execution. 2026-04-11 00:22:07.861203 | orchestrator | 2026-04-11 00:22:07 | INFO  | It takes a moment until task 70f19b95-e362-4568-b8ee-0c5843d6abcf (known-hosts) has been started and output is visible here. 2026-04-11 00:22:23.077959 | orchestrator | 2026-04-11 00:22:23.078134 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-11 00:22:23.078165 | orchestrator | 2026-04-11 00:22:23.078177 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-11 00:22:23.078190 | orchestrator | Saturday 11 April 2026 00:22:10 +0000 (0:00:00.175) 0:00:00.175 ******** 2026-04-11 00:22:23.078202 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-11 00:22:23.078214 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-11 00:22:23.078226 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-11 00:22:23.078236 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-11 00:22:23.078247 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-11 00:22:23.078258 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-11 00:22:23.078269 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-11 00:22:23.078280 | orchestrator | 2026-04-11 00:22:23.078291 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-11 
00:22:23.078303 | orchestrator | Saturday 11 April 2026 00:22:17 +0000 (0:00:06.412) 0:00:06.587 ******** 2026-04-11 00:22:23.078315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-11 00:22:23.078328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-11 00:22:23.078339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-11 00:22:23.078360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-11 00:22:23.078372 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-11 00:22:23.078383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-11 00:22:23.078393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-11 00:22:23.078404 | orchestrator | 2026-04-11 00:22:23.078415 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:23.078426 | orchestrator | Saturday 11 April 2026 00:22:17 +0000 (0:00:00.194) 0:00:06.782 ******** 2026-04-11 00:22:23.078463 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHnxatS5MpismQ3XO09SwS3eEhVnyzuuNsXR0Kd5LxeJWlSqc0b6hZjs6GzA6LFHvEuuhAlXq4S0EW1lgjcg9FQ=) 2026-04-11 00:22:23.078485 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs1P1+/+SDQTKMv6koxKgwyhhdeQ91MFKuyWsfmdg6xFB4IZnpmYI71+BniWt8Knu/QHdKWbQlDDqQcSOG1rngL+aKiVC0QTiarxjh797EtVhO8QkP4einYp3RI/sSMLyRi0QnQV6N0a13LDJAEdm1k+OcVMX4zl7qqx3iRPStaPalvO/VoRetSDnvM/QY5X5YGjJ/YYhxV2Za5yytE7AFK/URqtcJpl89I8BguXUgtrnhxXp0sdjvYm9yHkFZUDt8UTIMbLdBGQCA8OBSUeQmPl+j8h6ba3gRA4biC+1Xv8mO6nMHX9dRLd+1CgNkG2Kw0QxbgH0kmPbucutbP3ZEZVpSCO6zhJHT4/TzlLnEe/Kf+spRMR9ycVXfxe7ziP2MGXRpfqyg6OU7zsoG0T0pzcfWsNLHYJSspuludyymJvmn/zp5bqJ5haOYWs/ciEFiIvK7LsLidLTCJMeOG/Wa1ewbH84xRa+t/b731Z9NCU8BI22nHRwRl40mF1eTn8M=) 2026-04-11 00:22:23.078503 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILMZM9HavcWASfrRTkkh3gNgi5OWPQMLpuggWPbUMYXv) 2026-04-11 00:22:23.078519 | orchestrator | 2026-04-11 00:22:23.078532 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:23.078545 | orchestrator | Saturday 11 April 2026 00:22:18 +0000 (0:00:01.318) 0:00:08.100 ******** 2026-04-11 00:22:23.078558 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCkJYikdftg9ffmzR4/Js/4SRe41Z39cYoSkGef6DHxhuLYC96XCkxPfACqbsQbA+oFYxndciBdWPARhWD5scBw=) 2026-04-11 00:22:23.078599 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZS0H/xK06ZIp6/iOGOuYcl1xn5EV4KZED3sF5SjYFS4y7ur8UuPkODb52p1veRhHwxLjFxaM5fV0NsXOXb+G8/XJr2bOwutYJX0k8diyBF2aqmTkEXwqem3xNYTGAQ3xBn9iaiHBsYQ8cTzhf6v6Yhr3hgNl6vAM/3brKt0FsJuk6cvXwraLzsh1ihzeFuAOuS32GR2iMRIEdhnB+WG208t0yKHZnDIKONdD+Y/IiRf2wJqiiWj+b6zGbktUvS6HVzOA6qH68K8RXPVGMfBpkshs/4ngxznsviszaFBhxdGhZpBBPBiojwwbV80qZu0teSZv8kOkCakCCq6bwK4dNUrQHLZ9TJWgQLRXBp08O//fZJt3vPHl950mA4KB0FcDK2cLc2lLcjaOLHR5kSFftOstRvLC4krXodakbtHB8W0g4N/NEoedO5M65TR0YtsEY2wmqVuXxVXPviJjFFuKW8FXXxZWTVdTruemQOZFXmPwFa/KV9LviRXye5brMz1s=) 2026-04-11 00:22:23.078614 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINlryP/PIOrW/YUhnUky+qpiaedQ11KXRiTP8wLsqv4u) 2026-04-11 00:22:23.078626 | orchestrator | 2026-04-11 00:22:23.078668 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:23.078681 | orchestrator | Saturday 11 April 2026 00:22:19 +0000 (0:00:01.061) 0:00:09.161 ******** 2026-04-11 00:22:23.078695 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxn8vcsMEvSiQqRoMR/HGNL+TBAAbvubsu6SRhVRrxh7A5PT5MzHHjNGee0o+WlJopd6gKmyypEmCAc0dSf1TPpExuE5/Jv+K8GvEpLpd45QBnBk4Ie0ptwqSpvJIU4kdRwy8kvPkyNkcsV/W7iPQsLgeobq4Q1wWfDMHlQbw76L+NQMKyHSi/7l+c6oz3XM07amtNlEjwzDdE7J/sWTnNnJX8owCjNRiXk2mZfquRylhvRHNiKGLYVwsfiKhBBeqR0pVl+JkC5RvLzADGx3vJEEBDHeE8KL1P8FWzcKbeGzMHdKHgvC8EGGfXvdzHZUQZNcIM60hLHni1TqwjojeubFj9jB+I2+tLon6U4YovtUzJhBAqd3BDTf/ntoWrLi06fKFrNUg3JNIZ+GDrGRYhcVb9WBCydpCh1nUqC/978RTA9/0VPXU02ZhgVB0SHVbZ4Mg+quZ5ZTPvHGrk3bU2QLfPVlUrbfNgRvVTKmiZUUBjXHK3M9fh+1BTSR6ZGfU=) 2026-04-11 00:22:23.078709 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEoPyynoO4R0ccG7D3Il7zYxAR139IYm0XceDi8PRCHiQYs+Ek//DQ2fAYUhJzdvJULnDjcP6F9ohIjCpbpJKfk=) 2026-04-11 00:22:23.078722 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPtWTOhSYDO1SZ5FX5cfv79Q8Uu/PSe7QqCoMF02vEGR) 2026-04-11 00:22:23.078735 | orchestrator | 2026-04-11 00:22:23.078748 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:23.078762 | orchestrator | Saturday 11 April 2026 00:22:20 +0000 (0:00:01.029) 0:00:10.191 ******** 2026-04-11 00:22:23.078775 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCsPX1rMgLE3UC6R3Q0bmg33L3p8BQyEFlpkORTo5Yb0sh9DcnFlbE81BpYxRP4XZTCoVUfBfKlQnyMyFyJqnG0WtqBqQsDn0GPd7zKHMLzhLbJlmglDMN/ukXX3KJKgMpXrxORbvoyoXkKTLub8g5c6Fgo4BIHYVRG5tl+hgvzq793uZWMy+/tvg8GNUthlDCyXG4gHf8Eakm3W5DTC+orzHJCRxC23NpxA2qx/dDrQ7mEZFPALZoDXvErhSEHey25LylbqY9u7N0KYD6k4WmzyqghrZDoHC/eLUCFvFFbfCi8+v3i47ltQ2jFZI5pZJftJyQ+wvW3PdSxkJ40AAgXluOIdlgTvYph6fwfZxU51/6d4WZq5hmHAI+9ve34fv+iRuJ1hk/Aw9phcZ6P5PRq+X6OXOWfFogP5SBhQQ4zvak//+J/aWeNLfCbABkmRc3SCdzsTY6Ni9cWFDgaf14bSElrZ9+X+NhcSyERbOkQ0qg4znROk+VtnSEA4+YrUU=) 2026-04-11 00:22:23.078797 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE6OT/VdlDcu1KHdp1cOluRZ393gQG6K1mqKnVDGGEmHPCtOTWegL97Fd3a4LQbXg8yhhAMF/3RS7+AblVbRQW0=) 2026-04-11 00:22:23.078888 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEnVWN0VGnUUMf5/D3EOnaBLZ0OwrC5XZtG6HsZRVmpP) 2026-04-11 00:22:23.078900 | orchestrator | 2026-04-11 00:22:23.078911 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:23.078922 | orchestrator | Saturday 11 April 2026 00:22:21 +0000 (0:00:01.026) 0:00:11.218 ******** 2026-04-11 00:22:23.078941 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDCh1OndFv3LKYR0YxoUOgkTOD8p8PmzgIOGVX59Wkee/BPSPuVzx+Sjedf3dksPZlutDAK35vin/2W+AYeCH9cg5ddo9ic39xUu1bQHzBBgQL1CODJTbmg5/FCnH6YtSVGrtQxsCEvpIO8mWviyTVZ/fpqTdLuJLFAFQRUDUyAiUZqXmSO1ZfUorp3bMn+ijEIh3WKt23SsGToQudeko2lTAoshBRs2RjOXBivmR/Anl9gfn1HSl9wzMXFxCz8jmlvVrom1MSahSJ9+FvBVgf26WP1DWQHInEA/zataKz59BNhCvCE+TjbeygKqt8pCnc8+mW81f/erm3Z2PuX1+IKYFtDmXrP4tLS2dPJo3SxuPHWuBsM7FjJtr8esJW23s4S/MPhhRWoGHMfzG4L7PxV1W2xAd+4yy2lseKWszucSbw0//gPNGUgbLl4s3hC7tn7Q1fAOrpPPTh2PuOV7oXaG+btS9/PJPM4T8s69+DHx+XywittkJEZ84lCPAA1fyE=) 2026-04-11 00:22:23.078964 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMMqX1QQFMDempiNPbafBpxpf2qlIo/5YlquHpGVDRwb2DisE+kF/A07T/VFTmfhcyp64WeAKu2H1SjGQhOCUmE=) 2026-04-11 00:22:23.078992 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO3wYmO9m9/OIOk65qjFVB86n4yzeMGldLam1VW6lMiC) 2026-04-11 00:22:23.079010 | orchestrator | 2026-04-11 00:22:23.079028 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:23.079046 | orchestrator | Saturday 11 April 2026 00:22:22 +0000 (0:00:01.003) 0:00:12.222 ******** 2026-04-11 00:22:23.079074 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK8v7NYabd45HJMXs86eJGGIq9UX8bXGUaWwBMSa6fIs) 2026-04-11 00:22:33.778988 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC8L27vlzwbwy7B2RMUl/6PbTaP0KwUPfZHjGUfcZR3O5o/Rlx9RqSxQs/O2GAHZS/PXWBAoOWVNqr22BtGk5EGtFbck0Y1J6KY9sZxD8K8OtTncQ/9xdDL9gy67oCFY1rM9rSbSj5LyuVK+1AUmv3ZR+cNOrmzAnJelUjeYEbYOSHdji7Hrjn5kA3j+Xw6SGobSI+WPZtBFlMrbRS+Xob6hUU9jyrvRtAv5RReDtErJtNcST0S+T71UpyDZh9FiZ5OzpGDbLKwBuJU3qGjLMOYglU6lONCwSCfP7ElMJ299coHry7sZRlyt577vGX3x444kUVN8/hxfB+vrsrmNwRwXmqJrKUUAK6YbS74ZjCNloKZ1HNnPCPzOAKG6mcBLvNh00IkyRWGaA9+7Z41sV2N9jUbj3oNBs97KF+CDgaVZU1xYTCPK4ILbVTQcu22oNbWllIOOtpHga75fHbVRkrbUf7NKs3PWlO13i3DcRS7TUiJ1Eps2V0GHApahyJSobk=) 2026-04-11 00:22:33.779100 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIpjUcD/H4YLi21fvPl2vFVa+iwzrNS5mnFGLLr3et/SmbcOEGR13B3p4lqOHoIUKqBAn165mMU38hw6kiMN550=) 2026-04-11 00:22:33.779119 | orchestrator | 2026-04-11 00:22:33.779133 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:33.779146 | orchestrator | Saturday 11 April 2026 00:22:23 +0000 (0:00:01.017) 0:00:13.239 ******** 2026-04-11 00:22:33.779158 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCJwldDw8FySgXIxrsFW9DgeZPcBTt8PzrEXQipus4bbHPypjWtkqYoO04cF8e6eP0SPEN76UQHwjvYAVIOaRCA9iUweJFYrycpkyXev71jOEcZ0EeGdMJc4Sug7f59NhQ8u+SP3wE/zNOvDgz8Dz/7aycgsPaB5N9lLSgN9Hi7FUfZmcO/OGhqC3XrHKM6Zh6m4iZKcQ7w1x7M+uZfH1uJ3Bt3GOQFIvm7Q5QMPSJjs6gCBz8YUA0BWqqcd3mUFB4EnQJZnq7qRWcHWE51YsA/mHmQLh/fu1KGPhaP3yyuv0w2vuKgUJbfmB5zUUP++fnUuHq9VWfjgpFXIqNlaGxnRZ8CnEuprX0jJHQbq+hxxau6T7cDMyt3W5Zcd3PM/n7sOwMHxJYUyZiU5NvwoA65UK+g+7BVGXquodjUphUvLQi3O3nGXErGM68lwIVvO92X2IpduayzsrNuGVu4kkgE+4XN9UJhQAvet6L4TVYMbp7tl94SKhrS9VURh8y/ps=) 2026-04-11 00:22:33.779194 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH0lJCm9m8My5WACVwoeexEgOsuv7EjSw6R1FV3vbmxlO2kVk370jgPyCcZa11xaW0jfADcm1w/s+plYxt5veWU=) 
2026-04-11 00:22:33.779206 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAGpZTP0o083GhaAFf8Q9CDpUAlMrGHdUiFr+KQH/8KQ) 2026-04-11 00:22:33.779218 | orchestrator | 2026-04-11 00:22:33.779230 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-11 00:22:33.779241 | orchestrator | Saturday 11 April 2026 00:22:24 +0000 (0:00:01.039) 0:00:14.279 ******** 2026-04-11 00:22:33.779253 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-11 00:22:33.779264 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-11 00:22:33.779275 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-11 00:22:33.779286 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-11 00:22:33.779296 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-11 00:22:33.779307 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-11 00:22:33.779318 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-11 00:22:33.779328 | orchestrator | 2026-04-11 00:22:33.779339 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-11 00:22:33.779351 | orchestrator | Saturday 11 April 2026 00:22:30 +0000 (0:00:05.258) 0:00:19.537 ******** 2026-04-11 00:22:33.779363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-11 00:22:33.779376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-11 00:22:33.779387 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-11 00:22:33.779398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-11 00:22:33.779409 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-11 00:22:33.779420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-11 00:22:33.779430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-11 00:22:33.779441 | orchestrator | 2026-04-11 00:22:33.779468 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:33.779479 | orchestrator | Saturday 11 April 2026 00:22:30 +0000 (0:00:00.170) 0:00:19.708 ******** 2026-04-11 00:22:33.779508 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHnxatS5MpismQ3XO09SwS3eEhVnyzuuNsXR0Kd5LxeJWlSqc0b6hZjs6GzA6LFHvEuuhAlXq4S0EW1lgjcg9FQ=) 2026-04-11 00:22:33.779531 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCs1P1+/+SDQTKMv6koxKgwyhhdeQ91MFKuyWsfmdg6xFB4IZnpmYI71+BniWt8Knu/QHdKWbQlDDqQcSOG1rngL+aKiVC0QTiarxjh797EtVhO8QkP4einYp3RI/sSMLyRi0QnQV6N0a13LDJAEdm1k+OcVMX4zl7qqx3iRPStaPalvO/VoRetSDnvM/QY5X5YGjJ/YYhxV2Za5yytE7AFK/URqtcJpl89I8BguXUgtrnhxXp0sdjvYm9yHkFZUDt8UTIMbLdBGQCA8OBSUeQmPl+j8h6ba3gRA4biC+1Xv8mO6nMHX9dRLd+1CgNkG2Kw0QxbgH0kmPbucutbP3ZEZVpSCO6zhJHT4/TzlLnEe/Kf+spRMR9ycVXfxe7ziP2MGXRpfqyg6OU7zsoG0T0pzcfWsNLHYJSspuludyymJvmn/zp5bqJ5haOYWs/ciEFiIvK7LsLidLTCJMeOG/Wa1ewbH84xRa+t/b731Z9NCU8BI22nHRwRl40mF1eTn8M=) 2026-04-11 00:22:33.779553 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILMZM9HavcWASfrRTkkh3gNgi5OWPQMLpuggWPbUMYXv) 2026-04-11 00:22:33.779567 | orchestrator | 2026-04-11 00:22:33.779580 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:33.779592 | orchestrator | Saturday 11 April 2026 00:22:31 +0000 (0:00:01.085) 0:00:20.795 ******** 2026-04-11 00:22:33.779606 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZS0H/xK06ZIp6/iOGOuYcl1xn5EV4KZED3sF5SjYFS4y7ur8UuPkODb52p1veRhHwxLjFxaM5fV0NsXOXb+G8/XJr2bOwutYJX0k8diyBF2aqmTkEXwqem3xNYTGAQ3xBn9iaiHBsYQ8cTzhf6v6Yhr3hgNl6vAM/3brKt0FsJuk6cvXwraLzsh1ihzeFuAOuS32GR2iMRIEdhnB+WG208t0yKHZnDIKONdD+Y/IiRf2wJqiiWj+b6zGbktUvS6HVzOA6qH68K8RXPVGMfBpkshs/4ngxznsviszaFBhxdGhZpBBPBiojwwbV80qZu0teSZv8kOkCakCCq6bwK4dNUrQHLZ9TJWgQLRXBp08O//fZJt3vPHl950mA4KB0FcDK2cLc2lLcjaOLHR5kSFftOstRvLC4krXodakbtHB8W0g4N/NEoedO5M65TR0YtsEY2wmqVuXxVXPviJjFFuKW8FXXxZWTVdTruemQOZFXmPwFa/KV9LviRXye5brMz1s=) 2026-04-11 00:22:33.779620 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINlryP/PIOrW/YUhnUky+qpiaedQ11KXRiTP8wLsqv4u) 2026-04-11 00:22:33.779632 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCkJYikdftg9ffmzR4/Js/4SRe41Z39cYoSkGef6DHxhuLYC96XCkxPfACqbsQbA+oFYxndciBdWPARhWD5scBw=) 2026-04-11 00:22:33.779645 | orchestrator | 2026-04-11 00:22:33.779657 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:33.779713 | orchestrator | Saturday 11 April 2026 00:22:32 +0000 (0:00:01.046) 0:00:21.842 ******** 2026-04-11 00:22:33.779729 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxn8vcsMEvSiQqRoMR/HGNL+TBAAbvubsu6SRhVRrxh7A5PT5MzHHjNGee0o+WlJopd6gKmyypEmCAc0dSf1TPpExuE5/Jv+K8GvEpLpd45QBnBk4Ie0ptwqSpvJIU4kdRwy8kvPkyNkcsV/W7iPQsLgeobq4Q1wWfDMHlQbw76L+NQMKyHSi/7l+c6oz3XM07amtNlEjwzDdE7J/sWTnNnJX8owCjNRiXk2mZfquRylhvRHNiKGLYVwsfiKhBBeqR0pVl+JkC5RvLzADGx3vJEEBDHeE8KL1P8FWzcKbeGzMHdKHgvC8EGGfXvdzHZUQZNcIM60hLHni1TqwjojeubFj9jB+I2+tLon6U4YovtUzJhBAqd3BDTf/ntoWrLi06fKFrNUg3JNIZ+GDrGRYhcVb9WBCydpCh1nUqC/978RTA9/0VPXU02ZhgVB0SHVbZ4Mg+quZ5ZTPvHGrk3bU2QLfPVlUrbfNgRvVTKmiZUUBjXHK3M9fh+1BTSR6ZGfU=) 2026-04-11 00:22:33.779742 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEoPyynoO4R0ccG7D3Il7zYxAR139IYm0XceDi8PRCHiQYs+Ek//DQ2fAYUhJzdvJULnDjcP6F9ohIjCpbpJKfk=) 2026-04-11 00:22:33.779757 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPtWTOhSYDO1SZ5FX5cfv79Q8Uu/PSe7QqCoMF02vEGR) 2026-04-11 00:22:33.779769 | orchestrator | 2026-04-11 00:22:33.779781 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:33.779793 | orchestrator | Saturday 11 April 2026 00:22:33 +0000 (0:00:01.075) 0:00:22.917 ******** 2026-04-11 00:22:33.779815 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDCsPX1rMgLE3UC6R3Q0bmg33L3p8BQyEFlpkORTo5Yb0sh9DcnFlbE81BpYxRP4XZTCoVUfBfKlQnyMyFyJqnG0WtqBqQsDn0GPd7zKHMLzhLbJlmglDMN/ukXX3KJKgMpXrxORbvoyoXkKTLub8g5c6Fgo4BIHYVRG5tl+hgvzq793uZWMy+/tvg8GNUthlDCyXG4gHf8Eakm3W5DTC+orzHJCRxC23NpxA2qx/dDrQ7mEZFPALZoDXvErhSEHey25LylbqY9u7N0KYD6k4WmzyqghrZDoHC/eLUCFvFFbfCi8+v3i47ltQ2jFZI5pZJftJyQ+wvW3PdSxkJ40AAgXluOIdlgTvYph6fwfZxU51/6d4WZq5hmHAI+9ve34fv+iRuJ1hk/Aw9phcZ6P5PRq+X6OXOWfFogP5SBhQQ4zvak//+J/aWeNLfCbABkmRc3SCdzsTY6Ni9cWFDgaf14bSElrZ9+X+NhcSyERbOkQ0qg4znROk+VtnSEA4+YrUU=) 2026-04-11 00:22:39.532905 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE6OT/VdlDcu1KHdp1cOluRZ393gQG6K1mqKnVDGGEmHPCtOTWegL97Fd3a4LQbXg8yhhAMF/3RS7+AblVbRQW0=) 2026-04-11 00:22:39.533019 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEnVWN0VGnUUMf5/D3EOnaBLZ0OwrC5XZtG6HsZRVmpP) 2026-04-11 00:22:39.533035 | orchestrator | 2026-04-11 00:22:39.533049 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:39.533062 | orchestrator | Saturday 11 April 2026 00:22:35 +0000 (0:00:02.027) 0:00:24.945 ******** 2026-04-11 00:22:39.533073 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO3wYmO9m9/OIOk65qjFVB86n4yzeMGldLam1VW6lMiC) 2026-04-11 00:22:39.533086 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDCh1OndFv3LKYR0YxoUOgkTOD8p8PmzgIOGVX59Wkee/BPSPuVzx+Sjedf3dksPZlutDAK35vin/2W+AYeCH9cg5ddo9ic39xUu1bQHzBBgQL1CODJTbmg5/FCnH6YtSVGrtQxsCEvpIO8mWviyTVZ/fpqTdLuJLFAFQRUDUyAiUZqXmSO1ZfUorp3bMn+ijEIh3WKt23SsGToQudeko2lTAoshBRs2RjOXBivmR/Anl9gfn1HSl9wzMXFxCz8jmlvVrom1MSahSJ9+FvBVgf26WP1DWQHInEA/zataKz59BNhCvCE+TjbeygKqt8pCnc8+mW81f/erm3Z2PuX1+IKYFtDmXrP4tLS2dPJo3SxuPHWuBsM7FjJtr8esJW23s4S/MPhhRWoGHMfzG4L7PxV1W2xAd+4yy2lseKWszucSbw0//gPNGUgbLl4s3hC7tn7Q1fAOrpPPTh2PuOV7oXaG+btS9/PJPM4T8s69+DHx+XywittkJEZ84lCPAA1fyE=) 2026-04-11 00:22:39.533101 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMMqX1QQFMDempiNPbafBpxpf2qlIo/5YlquHpGVDRwb2DisE+kF/A07T/VFTmfhcyp64WeAKu2H1SjGQhOCUmE=) 2026-04-11 00:22:39.533112 | orchestrator | 2026-04-11 00:22:39.533124 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:39.533135 | orchestrator | Saturday 11 April 2026 00:22:36 +0000 (0:00:01.025) 0:00:25.970 ******** 2026-04-11 00:22:39.533147 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8L27vlzwbwy7B2RMUl/6PbTaP0KwUPfZHjGUfcZR3O5o/Rlx9RqSxQs/O2GAHZS/PXWBAoOWVNqr22BtGk5EGtFbck0Y1J6KY9sZxD8K8OtTncQ/9xdDL9gy67oCFY1rM9rSbSj5LyuVK+1AUmv3ZR+cNOrmzAnJelUjeYEbYOSHdji7Hrjn5kA3j+Xw6SGobSI+WPZtBFlMrbRS+Xob6hUU9jyrvRtAv5RReDtErJtNcST0S+T71UpyDZh9FiZ5OzpGDbLKwBuJU3qGjLMOYglU6lONCwSCfP7ElMJ299coHry7sZRlyt577vGX3x444kUVN8/hxfB+vrsrmNwRwXmqJrKUUAK6YbS74ZjCNloKZ1HNnPCPzOAKG6mcBLvNh00IkyRWGaA9+7Z41sV2N9jUbj3oNBs97KF+CDgaVZU1xYTCPK4ILbVTQcu22oNbWllIOOtpHga75fHbVRkrbUf7NKs3PWlO13i3DcRS7TUiJ1Eps2V0GHApahyJSobk=) 2026-04-11 00:22:39.533159 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIpjUcD/H4YLi21fvPl2vFVa+iwzrNS5mnFGLLr3et/SmbcOEGR13B3p4lqOHoIUKqBAn165mMU38hw6kiMN550=) 
2026-04-11 00:22:39.533190 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK8v7NYabd45HJMXs86eJGGIq9UX8bXGUaWwBMSa6fIs) 2026-04-11 00:22:39.533201 | orchestrator | 2026-04-11 00:22:39.533213 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 00:22:39.533225 | orchestrator | Saturday 11 April 2026 00:22:37 +0000 (0:00:01.028) 0:00:26.999 ******** 2026-04-11 00:22:39.533236 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCJwldDw8FySgXIxrsFW9DgeZPcBTt8PzrEXQipus4bbHPypjWtkqYoO04cF8e6eP0SPEN76UQHwjvYAVIOaRCA9iUweJFYrycpkyXev71jOEcZ0EeGdMJc4Sug7f59NhQ8u+SP3wE/zNOvDgz8Dz/7aycgsPaB5N9lLSgN9Hi7FUfZmcO/OGhqC3XrHKM6Zh6m4iZKcQ7w1x7M+uZfH1uJ3Bt3GOQFIvm7Q5QMPSJjs6gCBz8YUA0BWqqcd3mUFB4EnQJZnq7qRWcHWE51YsA/mHmQLh/fu1KGPhaP3yyuv0w2vuKgUJbfmB5zUUP++fnUuHq9VWfjgpFXIqNlaGxnRZ8CnEuprX0jJHQbq+hxxau6T7cDMyt3W5Zcd3PM/n7sOwMHxJYUyZiU5NvwoA65UK+g+7BVGXquodjUphUvLQi3O3nGXErGM68lwIVvO92X2IpduayzsrNuGVu4kkgE+4XN9UJhQAvet6L4TVYMbp7tl94SKhrS9VURh8y/ps=) 2026-04-11 00:22:39.533268 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH0lJCm9m8My5WACVwoeexEgOsuv7EjSw6R1FV3vbmxlO2kVk370jgPyCcZa11xaW0jfADcm1w/s+plYxt5veWU=) 2026-04-11 00:22:39.533280 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAGpZTP0o083GhaAFf8Q9CDpUAlMrGHdUiFr+KQH/8KQ) 2026-04-11 00:22:39.533291 | orchestrator | 2026-04-11 00:22:39.533302 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-11 00:22:39.533313 | orchestrator | Saturday 11 April 2026 00:22:38 +0000 (0:00:01.058) 0:00:28.057 ******** 2026-04-11 00:22:39.533325 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-11 00:22:39.533337 | orchestrator | skipping: [testbed-manager] 
=> (item=testbed-node-0)  2026-04-11 00:22:39.533365 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-11 00:22:39.533377 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-11 00:22:39.533388 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-11 00:22:39.533399 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-11 00:22:39.533410 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-11 00:22:39.533421 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:22:39.533433 | orchestrator | 2026-04-11 00:22:39.533446 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-11 00:22:39.533459 | orchestrator | Saturday 11 April 2026 00:22:38 +0000 (0:00:00.196) 0:00:28.254 ******** 2026-04-11 00:22:39.533471 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:22:39.533483 | orchestrator | 2026-04-11 00:22:39.533496 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-11 00:22:39.533509 | orchestrator | Saturday 11 April 2026 00:22:38 +0000 (0:00:00.046) 0:00:28.300 ******** 2026-04-11 00:22:39.533521 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:22:39.533533 | orchestrator | 2026-04-11 00:22:39.533552 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-11 00:22:39.533564 | orchestrator | Saturday 11 April 2026 00:22:38 +0000 (0:00:00.055) 0:00:28.355 ******** 2026-04-11 00:22:39.533575 | orchestrator | changed: [testbed-manager] 2026-04-11 00:22:39.533586 | orchestrator | 2026-04-11 00:22:39.533597 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:22:39.533608 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-11 00:22:39.533620 | 
orchestrator | 2026-04-11 00:22:39.533631 | orchestrator | 2026-04-11 00:22:39.533642 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:22:39.533653 | orchestrator | Saturday 11 April 2026 00:22:39 +0000 (0:00:00.488) 0:00:28.844 ******** 2026-04-11 00:22:39.533664 | orchestrator | =============================================================================== 2026-04-11 00:22:39.533675 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.41s 2026-04-11 00:22:39.533685 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.26s 2026-04-11 00:22:39.533730 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.03s 2026-04-11 00:22:39.533741 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.32s 2026-04-11 00:22:39.533752 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-04-11 00:22:39.533763 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-04-11 00:22:39.533774 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-11 00:22:39.533784 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-11 00:22:39.533795 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-04-11 00:22:39.533814 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-11 00:22:39.533824 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-11 00:22:39.533835 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-11 00:22:39.533846 | orchestrator | osism.commons.known_hosts : Write 
scanned known_hosts entries ----------- 1.03s 2026-04-11 00:22:39.533856 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-11 00:22:39.533867 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-11 00:22:39.533877 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-11 00:22:39.533889 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.49s 2026-04-11 00:22:39.533900 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.20s 2026-04-11 00:22:39.533910 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2026-04-11 00:22:39.533921 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-04-11 00:22:39.725977 | orchestrator | + osism apply squid 2026-04-11 00:22:51.152840 | orchestrator | 2026-04-11 00:22:51 | INFO  | Prepare task for execution of squid. 2026-04-11 00:22:51.231254 | orchestrator | 2026-04-11 00:22:51 | INFO  | Task f3b9f280-3adf-4c7b-ae2a-dfdab67a0e33 (squid) was prepared for execution. 2026-04-11 00:22:51.231378 | orchestrator | 2026-04-11 00:22:51 | INFO  | It takes a moment until task f3b9f280-3adf-4c7b-ae2a-dfdab67a0e33 (squid) has been started and output is visible here. 
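The known_hosts play above scans each testbed host's SSH keys and writes them for the manager, finishing with a "Set file permissions" task. A minimal sketch of that write/dedupe/permission step, using a literal entry copied from the log and an illustrative temp path rather than the role's real target file:

```shell
# Hypothetical sketch of the role's "Write scanned known_hosts entries" step:
# append scanned keys, deduplicate, and fix permissions. The key line is
# copied from the log output above; the path is illustrative only.
KNOWN_HOSTS=/tmp/known_hosts_demo
rm -f "$KNOWN_HOSTS"
printf '%s\n' \
  'testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILMZM9HavcWASfrRTkkh3gNgi5OWPQMLpuggWPbUMYXv' \
  'testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILMZM9HavcWASfrRTkkh3gNgi5OWPQMLpuggWPbUMYXv' \
  >> "$KNOWN_HOSTS"
sort -u "$KNOWN_HOSTS" -o "$KNOWN_HOSTS"   # a repeated scan must not duplicate entries
chmod 0644 "$KNOWN_HOSTS"                  # mirrors the final "Set file permissions" task
wc -l < "$KNOWN_HOSTS"
```

In the actual role the same idempotency shows up in the log as tasks reporting `changed` only when new entries are written.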
2026-04-11 00:24:47.336977 | orchestrator | 2026-04-11 00:24:47.337075 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-11 00:24:47.337084 | orchestrator | 2026-04-11 00:24:47.337091 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-11 00:24:47.337098 | orchestrator | Saturday 11 April 2026 00:22:54 +0000 (0:00:00.205) 0:00:00.205 ******** 2026-04-11 00:24:47.337104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-11 00:24:47.337111 | orchestrator | 2026-04-11 00:24:47.337117 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-11 00:24:47.337123 | orchestrator | Saturday 11 April 2026 00:22:54 +0000 (0:00:00.073) 0:00:00.279 ******** 2026-04-11 00:24:47.337129 | orchestrator | ok: [testbed-manager] 2026-04-11 00:24:47.337136 | orchestrator | 2026-04-11 00:24:47.337142 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-11 00:24:47.337148 | orchestrator | Saturday 11 April 2026 00:22:56 +0000 (0:00:02.302) 0:00:02.582 ******** 2026-04-11 00:24:47.337155 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-11 00:24:47.337192 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-11 00:24:47.337200 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-11 00:24:47.337206 | orchestrator | 2026-04-11 00:24:47.337212 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-11 00:24:47.337218 | orchestrator | Saturday 11 April 2026 00:22:57 +0000 (0:00:01.186) 0:00:03.768 ******** 2026-04-11 00:24:47.337225 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-11 00:24:47.337231 | 
orchestrator | 2026-04-11 00:24:47.337237 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-11 00:24:47.337243 | orchestrator | Saturday 11 April 2026 00:22:58 +0000 (0:00:01.051) 0:00:04.819 ******** 2026-04-11 00:24:47.337248 | orchestrator | ok: [testbed-manager] 2026-04-11 00:24:47.337255 | orchestrator | 2026-04-11 00:24:47.337262 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-11 00:24:47.337268 | orchestrator | Saturday 11 April 2026 00:22:59 +0000 (0:00:00.338) 0:00:05.158 ******** 2026-04-11 00:24:47.337294 | orchestrator | changed: [testbed-manager] 2026-04-11 00:24:47.337300 | orchestrator | 2026-04-11 00:24:47.337309 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-11 00:24:47.337315 | orchestrator | Saturday 11 April 2026 00:23:00 +0000 (0:00:00.932) 0:00:06.090 ******** 2026-04-11 00:24:47.337321 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-11 00:24:47.337327 | orchestrator | ok: [testbed-manager]
2026-04-11 00:24:47.337333 | orchestrator |
2026-04-11 00:24:47.337339 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-04-11 00:24:47.337344 | orchestrator | Saturday 11 April 2026 00:23:34 +0000 (0:00:34.201) 0:00:40.291 ********
2026-04-11 00:24:47.337350 | orchestrator | changed: [testbed-manager]
2026-04-11 00:24:47.337356 | orchestrator |
2026-04-11 00:24:47.337362 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-04-11 00:24:47.337367 | orchestrator | Saturday 11 April 2026 00:23:46 +0000 (0:00:11.953) 0:00:52.245 ********
2026-04-11 00:24:47.337373 | orchestrator | Pausing for 60 seconds
2026-04-11 00:24:47.337379 | orchestrator | changed: [testbed-manager]
2026-04-11 00:24:47.337385 | orchestrator |
2026-04-11 00:24:47.337405 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-04-11 00:24:47.337411 | orchestrator | Saturday 11 April 2026 00:24:46 +0000 (0:01:00.078) 0:01:52.323 ********
2026-04-11 00:24:47.337417 | orchestrator | ok: [testbed-manager]
2026-04-11 00:24:47.337423 | orchestrator |
2026-04-11 00:24:47.337429 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-04-11 00:24:47.337435 | orchestrator | Saturday 11 April 2026 00:24:46 +0000 (0:00:00.062) 0:01:52.386 ********
2026-04-11 00:24:47.337440 | orchestrator | changed: [testbed-manager]
2026-04-11 00:24:47.337446 | orchestrator |
2026-04-11 00:24:47.337452 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:24:47.337458 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:24:47.337463 | orchestrator |
2026-04-11 00:24:47.337469 | orchestrator |
2026-04-11 00:24:47.337475 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:24:47.337481 | orchestrator | Saturday 11 April 2026 00:24:47 +0000 (0:00:00.586) 0:01:52.972 ********
2026-04-11 00:24:47.337486 | orchestrator | ===============================================================================
2026-04-11 00:24:47.337492 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2026-04-11 00:24:47.337498 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.20s
2026-04-11 00:24:47.337503 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.95s
2026-04-11 00:24:47.337509 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.30s
2026-04-11 00:24:47.337515 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.19s
2026-04-11 00:24:47.337520 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.05s
2026-04-11 00:24:47.337526 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s
2026-04-11 00:24:47.337532 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s
2026-04-11 00:24:47.337537 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s
2026-04-11 00:24:47.337544 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s
2026-04-11 00:24:47.337551 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2026-04-11 00:24:47.510641 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-11 00:24:47.510737 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-11 00:24:47.574750 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-11 00:24:47.574897 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2024.2
2026-04-11 00:24:47.583013 | orchestrator | + set -e
2026-04-11 00:24:47.583099 | orchestrator | + NAMESPACE=kolla/release/2024.2
2026-04-11 00:24:47.583142 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2024.2#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-11 00:24:47.589088 | orchestrator | ++ semver 10.0.0 9.0.0
2026-04-11 00:24:47.649068 | orchestrator | + [[ 1 -lt 0 ]]
2026-04-11 00:24:47.649637 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-04-11 00:24:59.055268 | orchestrator | 2026-04-11 00:24:59 | INFO  | Prepare task for execution of operator.
2026-04-11 00:24:59.122923 | orchestrator | 2026-04-11 00:24:59 | INFO  | Task 2c5c66ef-c7b9-4491-b4e3-eb4fda29197c (operator) was prepared for execution.
2026-04-11 00:24:59.123047 | orchestrator | 2026-04-11 00:24:59 | INFO  | It takes a moment until task 2c5c66ef-c7b9-4491-b4e3-eb4fda29197c (operator) has been started and output is visible here.
2026-04-11 00:25:13.858686 | orchestrator |
2026-04-11 00:25:13.858864 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-04-11 00:25:13.858898 | orchestrator |
2026-04-11 00:25:13.858919 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-11 00:25:13.858938 | orchestrator | Saturday 11 April 2026 00:25:02 +0000 (0:00:00.169) 0:00:00.169 ********
2026-04-11 00:25:13.858955 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:25:13.858974 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:25:13.858991 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:25:13.859009 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:25:13.859025 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:25:13.859045 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:25:13.859063 | orchestrator |
2026-04-11 00:25:13.859082 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-04-11 00:25:13.859101 | orchestrator | Saturday 11 April 2026 00:25:05 +0000 (0:00:03.159) 0:00:03.329 ********
2026-04-11 00:25:13.859130 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:25:13.859150 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:25:13.859168 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:25:13.859187 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:25:13.859205 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:25:13.859223 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:25:13.859243 | orchestrator |
2026-04-11 00:25:13.859262 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-04-11 00:25:13.859280 | orchestrator |
2026-04-11 00:25:13.859300 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-04-11 00:25:13.859319 | orchestrator | Saturday 11 April 2026 00:25:06 +0000 (0:00:00.783) 0:00:04.112 ********
2026-04-11 00:25:13.859337 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:25:13.859357 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:25:13.859376 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:25:13.859394 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:25:13.859412 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:25:13.859431 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:25:13.859451 | orchestrator |
2026-04-11 00:25:13.859470 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-04-11 00:25:13.859489 | orchestrator | Saturday 11 April 2026 00:25:06 +0000 (0:00:00.138) 0:00:04.250 ********
2026-04-11 00:25:13.859507 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:25:13.859524 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:25:13.859543 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:25:13.859562 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:25:13.859581 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:25:13.859599 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:25:13.859617 | orchestrator |
2026-04-11 00:25:13.859636 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-04-11 00:25:13.859654 | orchestrator | Saturday 11 April 2026 00:25:06 +0000 (0:00:00.145) 0:00:04.396 ********
2026-04-11 00:25:13.859673 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:25:13.859693 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:25:13.859712 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:25:13.859730 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:25:13.859777 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:25:13.859798 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:25:13.859816 | orchestrator |
2026-04-11 00:25:13.859861 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-04-11 00:25:13.859880 | orchestrator | Saturday 11 April 2026 00:25:06 +0000 (0:00:00.672) 0:00:05.069 ********
2026-04-11 00:25:13.859899 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:25:13.859918 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:25:13.859936 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:25:13.859954 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:25:13.859972 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:25:13.859992 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:25:13.860010 | orchestrator |
2026-04-11 00:25:13.860029 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-04-11 00:25:13.860048 | orchestrator | Saturday 11 April 2026 00:25:07 +0000 (0:00:00.864) 0:00:05.934 ********
2026-04-11 00:25:13.860066 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-04-11 00:25:13.860085 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-11 00:25:13.860103 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-04-11 00:25:13.860122 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-04-11 00:25:13.860140 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-04-11 00:25:13.860158 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-11 00:25:13.860176 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-11 00:25:13.860194 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-11 00:25:13.860213 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-11 00:25:13.860230 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-11 00:25:13.860248 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-11 00:25:13.860265 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-11 00:25:13.860285 | orchestrator |
2026-04-11 00:25:13.860303 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-11 00:25:13.860321 | orchestrator | Saturday 11 April 2026 00:25:09 +0000 (0:00:01.314) 0:00:07.248 ********
2026-04-11 00:25:13.860337 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:25:13.860349 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:25:13.860360 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:25:13.860370 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:25:13.860381 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:25:13.860392 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:25:13.860402 | orchestrator |
2026-04-11 00:25:13.860413 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-11 00:25:13.860425 | orchestrator | Saturday 11 April 2026 00:25:10 +0000 (0:00:01.386) 0:00:08.635 ********
2026-04-11 00:25:13.860436 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-11 00:25:13.860447 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-11 00:25:13.860458 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-11 00:25:13.860469 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-11 00:25:13.860480 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-11 00:25:13.860513 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-11 00:25:13.860524 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-11 00:25:13.860535 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-11 00:25:13.860545 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-11 00:25:13.860556 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-11 00:25:13.860566 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-11 00:25:13.860576 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-11 00:25:13.860598 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-11 00:25:13.860608 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-11 00:25:13.860619 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-11 00:25:13.860630 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-11 00:25:13.860641 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-11 00:25:13.860651 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-11 00:25:13.860662 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-11 00:25:13.860673 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-11 00:25:13.860683 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-11 00:25:13.860694 | orchestrator |
2026-04-11 00:25:13.860704 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-11 00:25:13.860715 | orchestrator | Saturday 11 April 2026 00:25:11 +0000 (0:00:01.288) 0:00:09.924 ********
2026-04-11 00:25:13.860726 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:25:13.860737 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:25:13.860747 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:25:13.860758 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:25:13.860769 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:25:13.860779 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:25:13.860789 | orchestrator |
2026-04-11 00:25:13.860800 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-11 00:25:13.860872 | orchestrator | Saturday 11 April 2026 00:25:11 +0000 (0:00:00.141) 0:00:10.065 ********
2026-04-11 00:25:13.860884 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:25:13.860894 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:25:13.860903 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:25:13.860912 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:25:13.860922 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:25:13.860931 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:25:13.860940 | orchestrator |
2026-04-11 00:25:13.860950 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-11 00:25:13.860960 | orchestrator | Saturday 11 April 2026 00:25:12 +0000 (0:00:00.164) 0:00:10.229 ********
2026-04-11 00:25:13.860969 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:25:13.860979 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:25:13.860988 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:25:13.860997 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:25:13.861007 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:25:13.861016 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:25:13.861025 | orchestrator |
2026-04-11 00:25:13.861035 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-11 00:25:13.861044 | orchestrator | Saturday 11 April 2026 00:25:12 +0000 (0:00:00.503) 0:00:10.733 ********
2026-04-11 00:25:13.861054 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:25:13.861063 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:25:13.861072 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:25:13.861082 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:25:13.861091 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:25:13.861101 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:25:13.861110 | orchestrator |
2026-04-11 00:25:13.861120 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-11 00:25:13.861129 | orchestrator | Saturday 11 April 2026 00:25:12 +0000 (0:00:00.157) 0:00:10.891 ********
2026-04-11 00:25:13.861139 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-11 00:25:13.861148 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:25:13.861158 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-11 00:25:13.861167 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:25:13.861184 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-11 00:25:13.861193 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-11 00:25:13.861203 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:25:13.861212 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:25:13.861221 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-11 00:25:13.861231 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:25:13.861240 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-11 00:25:13.861250 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:25:13.861259 | orchestrator |
2026-04-11 00:25:13.861268 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-11 00:25:13.861278 | orchestrator | Saturday 11 April 2026 00:25:13 +0000 (0:00:00.756) 0:00:11.648 ********
2026-04-11 00:25:13.861287 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:25:13.861297 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:25:13.861306 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:25:13.861316 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:25:13.861325 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:25:13.861334 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:25:13.861344 | orchestrator |
2026-04-11 00:25:13.861353 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-11 00:25:13.861362 | orchestrator | Saturday 11 April 2026 00:25:13 +0000 (0:00:00.151) 0:00:11.799 ********
2026-04-11 00:25:13.861372 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:25:13.861381 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:25:13.861391 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:25:13.861400 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:25:13.861417 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:25:15.068090 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:25:15.068161 | orchestrator |
2026-04-11 00:25:15.068168 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-11 00:25:15.068173 | orchestrator | Saturday 11 April 2026 00:25:13 +0000 (0:00:00.179) 0:00:11.978 ********
2026-04-11 00:25:15.068177 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:25:15.068181 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:25:15.068185 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:25:15.068189 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:25:15.068193 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:25:15.068197 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:25:15.068201 | orchestrator |
2026-04-11 00:25:15.068205 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-11 00:25:15.068222 | orchestrator | Saturday 11 April 2026 00:25:14 +0000 (0:00:00.125) 0:00:12.103 ********
2026-04-11 00:25:15.068226 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:25:15.068229 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:25:15.068233 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:25:15.068237 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:25:15.068240 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:25:15.068244 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:25:15.068248 | orchestrator |
2026-04-11 00:25:15.068252 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-11 00:25:15.068256 | orchestrator | Saturday 11 April 2026 00:25:14 +0000 (0:00:00.683) 0:00:12.787 ********
2026-04-11 00:25:15.068260 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:25:15.068263 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:25:15.068267 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:25:15.068271 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:25:15.068274 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:25:15.068278 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:25:15.068282 | orchestrator |
2026-04-11 00:25:15.068286 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:25:15.068291 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-11 00:25:15.068309 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-11 00:25:15.068313 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-11 00:25:15.068317 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-11 00:25:15.068321 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-11 00:25:15.068325 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-11 00:25:15.068328 | orchestrator |
2026-04-11 00:25:15.068332 | orchestrator |
2026-04-11 00:25:15.068336 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:25:15.068340 | orchestrator | Saturday 11 April 2026 00:25:14 +0000 (0:00:00.212) 0:00:12.999 ********
2026-04-11 00:25:15.068343 | orchestrator | ===============================================================================
2026-04-11 00:25:15.068347 | orchestrator | Gathering Facts --------------------------------------------------------- 3.16s
2026-04-11 00:25:15.068351 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.39s
2026-04-11 00:25:15.068354 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.32s
2026-04-11 00:25:15.068358 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.29s
2026-04-11 00:25:15.068363 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.86s
2026-04-11 00:25:15.068367 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s
2026-04-11 00:25:15.068370 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.76s
2026-04-11 00:25:15.068374 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.68s
2026-04-11 00:25:15.068378 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.67s
2026-04-11 00:25:15.068381 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.50s
2026-04-11 00:25:15.068385 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2026-04-11 00:25:15.068389 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2026-04-11 00:25:15.068393 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s
2026-04-11 00:25:15.068397 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2026-04-11 00:25:15.068400 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-04-11 00:25:15.068404 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2026-04-11 00:25:15.068408 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2026-04-11 00:25:15.068412 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s
2026-04-11 00:25:15.068415 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s
2026-04-11 00:25:15.241319 | orchestrator | + osism apply --environment custom facts
2026-04-11 00:25:16.471885 | orchestrator | 2026-04-11 00:25:16 | INFO  | Trying to run play facts in environment custom
2026-04-11 00:25:26.603931 | orchestrator | 2026-04-11 00:25:26 | INFO  | Prepare task for execution of facts.
2026-04-11 00:25:26.680198 | orchestrator | 2026-04-11 00:25:26 | INFO  | Task c13ca794-5ba1-4ea4-a244-a998cec6895e (facts) was prepared for execution.
2026-04-11 00:25:26.680302 | orchestrator | 2026-04-11 00:25:26 | INFO  | It takes a moment until task c13ca794-5ba1-4ea4-a244-a998cec6895e (facts) has been started and output is visible here.
2026-04-11 00:26:10.324122 | orchestrator |
2026-04-11 00:26:10.324256 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-04-11 00:26:10.324274 | orchestrator |
2026-04-11 00:26:10.324285 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-11 00:26:10.324297 | orchestrator | Saturday 11 April 2026 00:25:29 +0000 (0:00:00.111) 0:00:00.111 ********
2026-04-11 00:26:10.324307 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:26:10.324318 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:26:10.324329 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:26:10.324339 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:26:10.324349 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:26:10.324361 | orchestrator | ok: [testbed-manager]
2026-04-11 00:26:10.324379 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:26:10.324396 | orchestrator |
2026-04-11 00:26:10.324411 | orchestrator | TASK [Copy fact file] **********************************************************
2026-04-11 00:26:10.324428 | orchestrator | Saturday 11 April 2026 00:25:31 +0000 (0:00:01.367) 0:00:01.479 ********
2026-04-11 00:26:10.324444 | orchestrator | ok: [testbed-manager]
2026-04-11 00:26:10.324459 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:26:10.324473 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:26:10.324486 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:26:10.324502 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:26:10.324519 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:26:10.324534 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:26:10.324550 | orchestrator |
2026-04-11 00:26:10.324568 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-04-11 00:26:10.324585 | orchestrator |
2026-04-11 00:26:10.324603 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-11 00:26:10.324615 | orchestrator | Saturday 11 April 2026 00:25:32 +0000 (0:00:01.195) 0:00:02.674 ********
2026-04-11 00:26:10.324624 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:26:10.324634 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:26:10.324644 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:26:10.324656 | orchestrator |
2026-04-11 00:26:10.324667 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-11 00:26:10.324679 | orchestrator | Saturday 11 April 2026 00:25:32 +0000 (0:00:00.101) 0:00:02.775 ********
2026-04-11 00:26:10.324691 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:26:10.324702 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:26:10.324713 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:26:10.324725 | orchestrator |
2026-04-11 00:26:10.324736 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-11 00:26:10.324747 | orchestrator | Saturday 11 April 2026 00:25:32 +0000 (0:00:00.183) 0:00:02.958 ********
2026-04-11 00:26:10.324759 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:26:10.324770 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:26:10.324781 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:26:10.324792 | orchestrator |
2026-04-11 00:26:10.324803 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-11 00:26:10.324814 | orchestrator | Saturday 11 April 2026 00:25:32 +0000 (0:00:00.199) 0:00:03.158 ********
2026-04-11 00:26:10.324827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:26:10.324839 | orchestrator |
2026-04-11 00:26:10.324929 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-11 00:26:10.324945 | orchestrator | Saturday 11 April 2026 00:25:32 +0000 (0:00:00.121) 0:00:03.280 ********
2026-04-11 00:26:10.324956 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:26:10.324966 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:26:10.324976 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:26:10.324985 | orchestrator |
2026-04-11 00:26:10.324995 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-11 00:26:10.325028 | orchestrator | Saturday 11 April 2026 00:25:33 +0000 (0:00:00.432) 0:00:03.712 ********
2026-04-11 00:26:10.325038 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:26:10.325048 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:26:10.325057 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:26:10.325067 | orchestrator |
2026-04-11 00:26:10.325076 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-11 00:26:10.325086 | orchestrator | Saturday 11 April 2026 00:25:33 +0000 (0:00:00.142) 0:00:03.855 ********
2026-04-11 00:26:10.325095 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:26:10.325105 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:26:10.325114 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:26:10.325124 | orchestrator |
2026-04-11 00:26:10.325133 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-11 00:26:10.325143 | orchestrator | Saturday 11 April 2026 00:25:34 +0000 (0:00:01.012) 0:00:04.868 ********
2026-04-11 00:26:10.325160 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:26:10.325176 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:26:10.325192 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:26:10.325210 | orchestrator |
2026-04-11 00:26:10.325228 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-11 00:26:10.325245 | orchestrator | Saturday 11 April 2026 00:25:34 +0000 (0:00:00.433) 0:00:05.301 ********
2026-04-11 00:26:10.325259 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:26:10.325269 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:26:10.325278 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:26:10.325288 | orchestrator |
2026-04-11 00:26:10.325298 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-11 00:26:10.325308 | orchestrator | Saturday 11 April 2026 00:25:35 +0000 (0:00:01.059) 0:00:06.360 ********
2026-04-11 00:26:10.325317 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:26:10.325327 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:26:10.325336 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:26:10.325346 | orchestrator |
2026-04-11 00:26:10.325356 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-04-11 00:26:10.325365 | orchestrator | Saturday 11 April 2026 00:25:52 +0000 (0:00:16.800) 0:00:23.161 ********
2026-04-11 00:26:10.325375 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:26:10.325384 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:26:10.325394 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:26:10.325404 | orchestrator |
2026-04-11 00:26:10.325413 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-11 00:26:10.325448 | orchestrator | Saturday 11 April 2026 00:25:52 +0000 (0:00:00.089) 0:00:23.251 ********
2026-04-11 00:26:10.325459 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:26:10.325469 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:26:10.325478 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:26:10.325487 | orchestrator |
2026-04-11 00:26:10.325497 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-11 00:26:10.325506 | orchestrator | Saturday 11 April 2026 00:26:01 +0000 (0:00:08.618) 0:00:31.869 ********
2026-04-11 00:26:10.325516 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:26:10.325525 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:26:10.325535 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:26:10.325544 | orchestrator |
2026-04-11 00:26:10.325554 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-11 00:26:10.325564 | orchestrator | Saturday 11 April 2026 00:26:01 +0000 (0:00:00.433) 0:00:32.303 ********
2026-04-11 00:26:10.325573 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-11 00:26:10.325583 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-11 00:26:10.325592 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-11 00:26:10.325602 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-11 00:26:10.325647 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-11 00:26:10.325657 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-11 00:26:10.325666 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-11 00:26:10.325676 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-11 00:26:10.325685 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-11 00:26:10.325695 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-11 00:26:10.325704 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-11 00:26:10.325713 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-11 00:26:10.325723 | orchestrator |
2026-04-11 00:26:10.325732 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-11 00:26:10.325742 | orchestrator | Saturday 11 April 2026 00:26:05 +0000 (0:00:03.484) 0:00:35.787 ********
2026-04-11 00:26:10.325751 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:26:10.325760 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:26:10.325770 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:26:10.325779 | orchestrator |
2026-04-11 00:26:10.325789 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-11 00:26:10.325799 | orchestrator |
2026-04-11 00:26:10.325808 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-11 00:26:10.325818 | orchestrator | Saturday 11 April 2026 00:26:06 +0000 (0:00:01.274) 0:00:37.061 ********
2026-04-11 00:26:10.325827 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:26:10.325837 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:26:10.325846 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:26:10.325855 | orchestrator | ok: [testbed-manager]
2026-04-11 00:26:10.325888 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:26:10.325899 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:26:10.325908 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:26:10.325917 | orchestrator |
2026-04-11 00:26:10.325927 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:26:10.325937 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:26:10.325947 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:26:10.325959 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:26:10.325968 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:26:10.325978 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:26:10.325987 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:26:10.325997 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:26:10.326006 | orchestrator |
2026-04-11 00:26:10.326068 | orchestrator |
2026-04-11 00:26:10.326080 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:26:10.326090 | orchestrator | Saturday 11 April 2026 00:26:10 +0000 (0:00:03.678) 0:00:40.740 ********
2026-04-11 00:26:10.326100 | orchestrator | ===============================================================================
2026-04-11 00:26:10.326109 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.80s
2026-04-11 00:26:10.326119 | orchestrator | Install required packages (Debian) -------------------------------------- 8.62s
2026-04-11 00:26:10.326136 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.68s
2026-04-11 00:26:10.326146 | orchestrator | Copy fact files --------------------------------------------------------- 3.48s
2026-04-11 00:26:10.326155 | orchestrator | Create custom facts directory ------------------------------------------- 1.37s
2026-04-11 00:26:10.326165 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.27s
2026-04-11 00:26:10.326191 | orchestrator | Copy fact file ---------------------------------------------------------- 1.20s
2026-04-11 00:26:10.488342 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s
2026-04-11 00:26:10.488470 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.01s
2026-04-11 00:26:10.488488 | orchestrator | Create custom facts directory 
------------------------------------------- 0.43s 2026-04-11 00:26:10.488500 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.43s 2026-04-11 00:26:10.488510 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s 2026-04-11 00:26:10.488522 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s 2026-04-11 00:26:10.488533 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s 2026-04-11 00:26:10.488544 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s 2026-04-11 00:26:10.488554 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s 2026-04-11 00:26:10.488566 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s 2026-04-11 00:26:10.488577 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s 2026-04-11 00:26:10.672785 | orchestrator | + osism apply bootstrap 2026-04-11 00:26:21.961455 | orchestrator | 2026-04-11 00:26:21 | INFO  | Prepare task for execution of bootstrap. 2026-04-11 00:26:22.039530 | orchestrator | 2026-04-11 00:26:22 | INFO  | Task 86ac203c-0c91-43ab-bef3-efc35d3972f1 (bootstrap) was prepared for execution. 2026-04-11 00:26:22.039652 | orchestrator | 2026-04-11 00:26:22 | INFO  | It takes a moment until task 86ac203c-0c91-43ab-bef3-efc35d3972f1 (bootstrap) has been started and output is visible here. 
2026-04-11 00:26:38.013244 | orchestrator | 2026-04-11 00:26:38.013373 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-04-11 00:26:38.013398 | orchestrator | 2026-04-11 00:26:38.013418 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-04-11 00:26:38.013437 | orchestrator | Saturday 11 April 2026 00:26:25 +0000 (0:00:00.186) 0:00:00.186 ******** 2026-04-11 00:26:38.013456 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:38.013476 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:26:38.013494 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:26:38.013513 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:26:38.013533 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:26:38.013553 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:26:38.013571 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:26:38.013591 | orchestrator | 2026-04-11 00:26:38.013606 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-11 00:26:38.013617 | orchestrator | 2026-04-11 00:26:38.013628 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-11 00:26:38.013668 | orchestrator | Saturday 11 April 2026 00:26:25 +0000 (0:00:00.305) 0:00:00.492 ******** 2026-04-11 00:26:38.013679 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:26:38.013690 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:26:38.013701 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:26:38.013712 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:38.013722 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:26:38.013733 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:26:38.013746 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:26:38.013758 | orchestrator | 2026-04-11 00:26:38.013771 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-04-11 00:26:38.013846 | orchestrator | 2026-04-11 00:26:38.013861 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-11 00:26:38.013874 | orchestrator | Saturday 11 April 2026 00:26:30 +0000 (0:00:05.209) 0:00:05.702 ******** 2026-04-11 00:26:38.013888 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-11 00:26:38.013902 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-11 00:26:38.013914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-04-11 00:26:38.013924 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-11 00:26:38.013935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-11 00:26:38.013946 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-11 00:26:38.013957 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-04-11 00:26:38.013967 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-11 00:26:38.013978 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-11 00:26:38.013990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-11 00:26:38.014000 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-11 00:26:38.014011 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-11 00:26:38.014090 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-11 00:26:38.014101 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-04-11 00:26:38.014121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-11 00:26:38.014139 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-11 00:26:38.014158 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-11 00:26:38.014176 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-2)  2026-04-11 00:26:38.014193 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-11 00:26:38.014210 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-11 00:26:38.014227 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:26:38.014246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-04-11 00:26:38.014265 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-11 00:26:38.014286 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:26:38.014304 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-04-11 00:26:38.014323 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-11 00:26:38.014337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-11 00:26:38.014355 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-11 00:26:38.014397 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-11 00:26:38.014417 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-04-11 00:26:38.014436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-11 00:26:38.014455 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-11 00:26:38.014473 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-11 00:26:38.014492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-11 00:26:38.014511 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-11 00:26:38.014529 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-11 00:26:38.014547 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-11 00:26:38.014564 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-11 00:26:38.014582 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2026-04-11 00:26:38.014597 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 00:26:38.014614 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-11 00:26:38.014632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 00:26:38.014666 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-11 00:26:38.014686 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:26:38.014705 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-11 00:26:38.014724 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-11 00:26:38.014771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 00:26:38.014784 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-11 00:26:38.014795 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-11 00:26:38.014846 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-11 00:26:38.014858 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:26:38.014869 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:26:38.014880 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-11 00:26:38.014898 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:26:38.014916 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-11 00:26:38.014935 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:26:38.014953 | orchestrator | 2026-04-11 00:26:38.014972 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-04-11 00:26:38.014991 | orchestrator | 2026-04-11 00:26:38.015011 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-04-11 00:26:38.015030 | orchestrator | Saturday 11 April 2026 00:26:31 +0000 (0:00:00.412) 
0:00:06.114 ******** 2026-04-11 00:26:38.015042 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:38.015053 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:26:38.015064 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:26:38.015075 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:26:38.015086 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:26:38.015096 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:26:38.015107 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:26:38.015117 | orchestrator | 2026-04-11 00:26:38.015128 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-04-11 00:26:38.015139 | orchestrator | Saturday 11 April 2026 00:26:32 +0000 (0:00:01.239) 0:00:07.354 ******** 2026-04-11 00:26:38.015150 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:38.015160 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:26:38.015171 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:26:38.015182 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:26:38.015192 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:26:38.015203 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:26:38.015213 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:26:38.015224 | orchestrator | 2026-04-11 00:26:38.015234 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-04-11 00:26:38.015245 | orchestrator | Saturday 11 April 2026 00:26:33 +0000 (0:00:01.225) 0:00:08.580 ******** 2026-04-11 00:26:38.015257 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:26:38.015270 | orchestrator | 2026-04-11 00:26:38.015282 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-04-11 00:26:38.015292 | orchestrator | Saturday 
11 April 2026 00:26:34 +0000 (0:00:00.284) 0:00:08.864 ******** 2026-04-11 00:26:38.015303 | orchestrator | changed: [testbed-manager] 2026-04-11 00:26:38.015314 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:26:38.015325 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:26:38.015335 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:26:38.015346 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:26:38.015356 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:26:38.015367 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:26:38.015377 | orchestrator | 2026-04-11 00:26:38.015389 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-04-11 00:26:38.015409 | orchestrator | Saturday 11 April 2026 00:26:35 +0000 (0:00:01.427) 0:00:10.292 ******** 2026-04-11 00:26:38.015420 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:26:38.015432 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:26:38.015445 | orchestrator | 2026-04-11 00:26:38.015456 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-04-11 00:26:38.015473 | orchestrator | Saturday 11 April 2026 00:26:35 +0000 (0:00:00.253) 0:00:10.546 ******** 2026-04-11 00:26:38.015485 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:26:38.015495 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:26:38.015506 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:26:38.015517 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:26:38.015527 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:26:38.015538 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:26:38.015548 | orchestrator | 2026-04-11 00:26:38.015559 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2026-04-11 00:26:38.015570 | orchestrator | Saturday 11 April 2026 00:26:36 +0000 (0:00:01.055) 0:00:11.601 ******** 2026-04-11 00:26:38.015581 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:26:38.015591 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:26:38.015602 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:26:38.015612 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:26:38.015623 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:26:38.015633 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:26:38.015644 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:26:38.015654 | orchestrator | 2026-04-11 00:26:38.015665 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-04-11 00:26:38.015675 | orchestrator | Saturday 11 April 2026 00:26:37 +0000 (0:00:00.657) 0:00:12.259 ******** 2026-04-11 00:26:38.015686 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:26:38.015697 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:26:38.015707 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:26:38.015718 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:26:38.015729 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:26:38.015747 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:26:38.015766 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:38.015784 | orchestrator | 2026-04-11 00:26:38.015870 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-11 00:26:38.015897 | orchestrator | Saturday 11 April 2026 00:26:37 +0000 (0:00:00.428) 0:00:12.687 ******** 2026-04-11 00:26:38.015917 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:26:38.015936 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:26:38.015959 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:26:49.764969 | orchestrator | skipping: 
[testbed-node-2] 2026-04-11 00:26:49.765086 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:26:49.765102 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:26:49.765113 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:26:49.765125 | orchestrator | 2026-04-11 00:26:49.765138 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-11 00:26:49.765150 | orchestrator | Saturday 11 April 2026 00:26:38 +0000 (0:00:00.209) 0:00:12.897 ******** 2026-04-11 00:26:49.765163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:26:49.765190 | orchestrator | 2026-04-11 00:26:49.765202 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-11 00:26:49.765213 | orchestrator | Saturday 11 April 2026 00:26:38 +0000 (0:00:00.297) 0:00:13.195 ******** 2026-04-11 00:26:49.765225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:26:49.765259 | orchestrator | 2026-04-11 00:26:49.765271 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-04-11 00:26:49.765283 | orchestrator | Saturday 11 April 2026 00:26:38 +0000 (0:00:00.297) 0:00:13.493 ******** 2026-04-11 00:26:49.765294 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:49.765306 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:26:49.765316 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:26:49.765327 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:26:49.765337 | orchestrator | ok: [testbed-node-3] 2026-04-11 
00:26:49.765348 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:26:49.765358 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:26:49.765369 | orchestrator | 2026-04-11 00:26:49.765380 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-11 00:26:49.765390 | orchestrator | Saturday 11 April 2026 00:26:40 +0000 (0:00:01.438) 0:00:14.932 ******** 2026-04-11 00:26:49.765401 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:26:49.765412 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:26:49.765422 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:26:49.765435 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:26:49.765447 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:26:49.765458 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:26:49.765470 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:26:49.765482 | orchestrator | 2026-04-11 00:26:49.765494 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-11 00:26:49.765507 | orchestrator | Saturday 11 April 2026 00:26:40 +0000 (0:00:00.249) 0:00:15.181 ******** 2026-04-11 00:26:49.765519 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:49.765532 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:26:49.765547 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:26:49.765566 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:26:49.765586 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:26:49.765603 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:26:49.765621 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:26:49.765640 | orchestrator | 2026-04-11 00:26:49.765659 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-11 00:26:49.765679 | orchestrator | Saturday 11 April 2026 00:26:40 +0000 (0:00:00.606) 0:00:15.787 ******** 2026-04-11 00:26:49.765697 | orchestrator | skipping: 
[testbed-manager] 2026-04-11 00:26:49.765714 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:26:49.765726 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:26:49.765737 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:26:49.765747 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:26:49.765758 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:26:49.765769 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:26:49.765847 | orchestrator | 2026-04-11 00:26:49.765859 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-11 00:26:49.765871 | orchestrator | Saturday 11 April 2026 00:26:41 +0000 (0:00:00.236) 0:00:16.023 ******** 2026-04-11 00:26:49.765881 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:49.765892 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:26:49.765903 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:26:49.765914 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:26:49.765924 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:26:49.765935 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:26:49.765945 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:26:49.765956 | orchestrator | 2026-04-11 00:26:49.765967 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-11 00:26:49.765977 | orchestrator | Saturday 11 April 2026 00:26:41 +0000 (0:00:00.538) 0:00:16.562 ******** 2026-04-11 00:26:49.765988 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:49.765999 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:26:49.766075 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:26:49.766089 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:26:49.766100 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:26:49.766111 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:26:49.766122 | orchestrator | changed: 
[testbed-node-3] 2026-04-11 00:26:49.766132 | orchestrator | 2026-04-11 00:26:49.766143 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-11 00:26:49.766154 | orchestrator | Saturday 11 April 2026 00:26:42 +0000 (0:00:01.071) 0:00:17.634 ******** 2026-04-11 00:26:49.766165 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:49.766176 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:26:49.766186 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:26:49.766197 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:26:49.766207 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:26:49.766218 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:26:49.766229 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:26:49.766240 | orchestrator | 2026-04-11 00:26:49.766251 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-11 00:26:49.766262 | orchestrator | Saturday 11 April 2026 00:26:43 +0000 (0:00:01.012) 0:00:18.646 ******** 2026-04-11 00:26:49.766295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:26:49.766307 | orchestrator | 2026-04-11 00:26:49.766318 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-11 00:26:49.766329 | orchestrator | Saturday 11 April 2026 00:26:44 +0000 (0:00:00.289) 0:00:18.936 ******** 2026-04-11 00:26:49.766339 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:26:49.766350 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:26:49.766361 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:26:49.766371 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:26:49.766382 | orchestrator | changed: [testbed-node-0] 2026-04-11 
00:26:49.766393 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:26:49.766403 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:26:49.766414 | orchestrator | 2026-04-11 00:26:49.766425 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-11 00:26:49.766436 | orchestrator | Saturday 11 April 2026 00:26:45 +0000 (0:00:01.251) 0:00:20.188 ******** 2026-04-11 00:26:49.766446 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:49.766457 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:26:49.766468 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:26:49.766478 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:26:49.766489 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:26:49.766499 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:26:49.766510 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:26:49.766520 | orchestrator | 2026-04-11 00:26:49.766531 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-11 00:26:49.766544 | orchestrator | Saturday 11 April 2026 00:26:45 +0000 (0:00:00.205) 0:00:20.393 ******** 2026-04-11 00:26:49.766563 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:49.766582 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:26:49.766599 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:26:49.766616 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:26:49.766634 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:26:49.766652 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:26:49.766672 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:26:49.766690 | orchestrator | 2026-04-11 00:26:49.766708 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-11 00:26:49.766724 | orchestrator | Saturday 11 April 2026 00:26:45 +0000 (0:00:00.197) 0:00:20.590 ******** 2026-04-11 00:26:49.766735 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:49.766746 | 
orchestrator | ok: [testbed-node-0] 2026-04-11 00:26:49.766757 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:26:49.766767 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:26:49.766844 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:26:49.766856 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:26:49.766866 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:26:49.766877 | orchestrator | 2026-04-11 00:26:49.766888 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-11 00:26:49.766898 | orchestrator | Saturday 11 April 2026 00:26:45 +0000 (0:00:00.193) 0:00:20.784 ******** 2026-04-11 00:26:49.766910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:26:49.766923 | orchestrator | 2026-04-11 00:26:49.766934 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-11 00:26:49.766945 | orchestrator | Saturday 11 April 2026 00:26:46 +0000 (0:00:00.265) 0:00:21.049 ******** 2026-04-11 00:26:49.766969 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:49.766980 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:26:49.766990 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:26:49.767001 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:26:49.767012 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:26:49.767022 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:26:49.767033 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:26:49.767043 | orchestrator | 2026-04-11 00:26:49.767054 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-11 00:26:49.767070 | orchestrator | Saturday 11 April 2026 00:26:46 +0000 (0:00:00.517) 0:00:21.567 ******** 2026-04-11 00:26:49.767082 | orchestrator | 
skipping: [testbed-manager] 2026-04-11 00:26:49.767093 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:26:49.767103 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:26:49.767114 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:26:49.767125 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:26:49.767135 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:26:49.767146 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:26:49.767156 | orchestrator | 2026-04-11 00:26:49.767167 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-11 00:26:49.767178 | orchestrator | Saturday 11 April 2026 00:26:46 +0000 (0:00:00.212) 0:00:21.780 ******** 2026-04-11 00:26:49.767188 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:49.767199 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:26:49.767210 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:26:49.767220 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:26:49.767231 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:26:49.767242 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:26:49.767252 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:26:49.767263 | orchestrator | 2026-04-11 00:26:49.767274 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-11 00:26:49.767284 | orchestrator | Saturday 11 April 2026 00:26:48 +0000 (0:00:01.163) 0:00:22.944 ******** 2026-04-11 00:26:49.767295 | orchestrator | ok: [testbed-manager] 2026-04-11 00:26:49.767306 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:26:49.767316 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:26:49.767327 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:26:49.767337 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:26:49.767348 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:26:49.767358 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:26:49.767369 | orchestrator | 
2026-04-11 00:26:49.767380 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-11 00:26:49.767390 | orchestrator | Saturday 11 April 2026 00:26:48 +0000 (0:00:00.572) 0:00:23.516 ********
2026-04-11 00:26:49.767401 | orchestrator | ok: [testbed-manager]
2026-04-11 00:26:49.767412 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:26:49.767422 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:26:49.767433 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:26:49.767453 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:27:32.518957 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:27:32.519100 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:27:32.519107 | orchestrator |
2026-04-11 00:27:32.519114 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-11 00:27:32.519120 | orchestrator | Saturday 11 April 2026 00:26:49 +0000 (0:00:01.122) 0:00:24.639 ********
2026-04-11 00:27:32.519124 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:27:32.519129 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:27:32.519133 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:27:32.519137 | orchestrator | changed: [testbed-manager]
2026-04-11 00:27:32.519141 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:27:32.519144 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:27:32.519148 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:27:32.519152 | orchestrator |
2026-04-11 00:27:32.519156 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-11 00:27:32.519160 | orchestrator | Saturday 11 April 2026 00:27:07 +0000 (0:00:17.428) 0:00:42.068 ********
2026-04-11 00:27:32.519164 | orchestrator | ok: [testbed-manager]
2026-04-11 00:27:32.519168 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:27:32.519171 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:27:32.519175 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:27:32.519179 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:27:32.519182 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:27:32.519186 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:27:32.519190 | orchestrator |
2026-04-11 00:27:32.519194 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-11 00:27:32.519197 | orchestrator | Saturday 11 April 2026 00:27:07 +0000 (0:00:00.216) 0:00:42.285 ********
2026-04-11 00:27:32.519201 | orchestrator | ok: [testbed-manager]
2026-04-11 00:27:32.519205 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:27:32.519208 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:27:32.519212 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:27:32.519216 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:27:32.519219 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:27:32.519223 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:27:32.519227 | orchestrator |
2026-04-11 00:27:32.519230 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-11 00:27:32.519234 | orchestrator | Saturday 11 April 2026 00:27:07 +0000 (0:00:00.203) 0:00:42.488 ********
2026-04-11 00:27:32.519239 | orchestrator | ok: [testbed-manager]
2026-04-11 00:27:32.519242 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:27:32.519246 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:27:32.519250 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:27:32.519253 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:27:32.519257 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:27:32.519261 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:27:32.519264 | orchestrator |
2026-04-11 00:27:32.519268 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-11 00:27:32.519272 | orchestrator | Saturday 11 April 2026 00:27:07 +0000 (0:00:00.214) 0:00:42.702 ********
2026-04-11 00:27:32.519277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:27:32.519283 | orchestrator |
2026-04-11 00:27:32.519286 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-11 00:27:32.519290 | orchestrator | Saturday 11 April 2026 00:27:08 +0000 (0:00:00.312) 0:00:43.015 ********
2026-04-11 00:27:32.519294 | orchestrator | ok: [testbed-manager]
2026-04-11 00:27:32.519298 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:27:32.519301 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:27:32.519305 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:27:32.519309 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:27:32.519312 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:27:32.519316 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:27:32.519320 | orchestrator |
2026-04-11 00:27:32.519323 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-11 00:27:32.519333 | orchestrator | Saturday 11 April 2026 00:27:10 +0000 (0:00:01.868) 0:00:44.884 ********
2026-04-11 00:27:32.519337 | orchestrator | changed: [testbed-manager]
2026-04-11 00:27:32.519340 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:27:32.519358 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:27:32.519362 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:27:32.519365 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:27:32.519369 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:27:32.519373 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:27:32.519376 | orchestrator |
2026-04-11 00:27:32.519380 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-11 00:27:32.519384 | orchestrator | Saturday 11 April 2026 00:27:11 +0000 (0:00:01.136) 0:00:46.021 ********
2026-04-11 00:27:32.519388 | orchestrator | ok: [testbed-manager]
2026-04-11 00:27:32.519391 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:27:32.519395 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:27:32.519399 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:27:32.519402 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:27:32.519406 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:27:32.519410 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:27:32.519413 | orchestrator |
2026-04-11 00:27:32.519417 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-11 00:27:32.519421 | orchestrator | Saturday 11 April 2026 00:27:12 +0000 (0:00:00.862) 0:00:46.883 ********
2026-04-11 00:27:32.519426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:27:32.519431 | orchestrator |
2026-04-11 00:27:32.519435 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-11 00:27:32.519440 | orchestrator | Saturday 11 April 2026 00:27:12 +0000 (0:00:00.292) 0:00:47.175 ********
2026-04-11 00:27:32.519443 | orchestrator | changed: [testbed-manager]
2026-04-11 00:27:32.519447 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:27:32.519451 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:27:32.519455 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:27:32.519458 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:27:32.519462 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:27:32.519466 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:27:32.519469 | orchestrator |
2026-04-11 00:27:32.519486 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-11 00:27:32.519490 | orchestrator | Saturday 11 April 2026 00:27:13 +0000 (0:00:01.059) 0:00:48.235 ********
2026-04-11 00:27:32.519494 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:27:32.519499 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:27:32.519503 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:27:32.519507 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:27:32.519511 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:27:32.519515 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:27:32.519520 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:27:32.519524 | orchestrator |
2026-04-11 00:27:32.519528 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-11 00:27:32.519532 | orchestrator | Saturday 11 April 2026 00:27:13 +0000 (0:00:00.217) 0:00:48.452 ********
2026-04-11 00:27:32.519536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:27:32.519541 | orchestrator |
2026-04-11 00:27:32.519545 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-11 00:27:32.519549 | orchestrator | Saturday 11 April 2026 00:27:13 +0000 (0:00:00.287) 0:00:48.739 ********
2026-04-11 00:27:32.519553 | orchestrator | ok: [testbed-manager]
2026-04-11 00:27:32.519562 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:27:32.519567 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:27:32.519571 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:27:32.519575 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:27:32.519579 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:27:32.519583 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:27:32.519588 | orchestrator |
2026-04-11 00:27:32.519592 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-11 00:27:32.519596 | orchestrator | Saturday 11 April 2026 00:27:15 +0000 (0:00:01.840) 0:00:50.580 ********
2026-04-11 00:27:32.519600 | orchestrator | changed: [testbed-manager]
2026-04-11 00:27:32.519605 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:27:32.519609 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:27:32.519613 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:27:32.519617 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:27:32.519622 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:27:32.519626 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:27:32.519630 | orchestrator |
2026-04-11 00:27:32.519634 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-11 00:27:32.519638 | orchestrator | Saturday 11 April 2026 00:27:17 +0000 (0:00:01.241) 0:00:51.821 ********
2026-04-11 00:27:32.519642 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:27:32.519646 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:27:32.519651 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:27:32.519655 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:27:32.519659 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:27:32.519663 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:27:32.519693 | orchestrator | changed: [testbed-manager]
2026-04-11 00:27:32.519697 | orchestrator |
2026-04-11 00:27:32.519702 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-11 00:27:32.519706 | orchestrator | Saturday 11 April 2026 00:27:29 +0000 (0:00:12.426) 0:01:04.248 ********
2026-04-11 00:27:32.519711 | orchestrator | ok: [testbed-manager]
2026-04-11 00:27:32.519715 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:27:32.519720 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:27:32.519724 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:27:32.519727 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:27:32.519731 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:27:32.519735 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:27:32.519738 | orchestrator |
2026-04-11 00:27:32.519742 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-11 00:27:32.519746 | orchestrator | Saturday 11 April 2026 00:27:30 +0000 (0:00:01.200) 0:01:05.449 ********
2026-04-11 00:27:32.519749 | orchestrator | ok: [testbed-manager]
2026-04-11 00:27:32.519753 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:27:32.519761 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:27:32.519764 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:27:32.519768 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:27:32.519772 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:27:32.519775 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:27:32.519779 | orchestrator |
2026-04-11 00:27:32.519783 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-11 00:27:32.519786 | orchestrator | Saturday 11 April 2026 00:27:31 +0000 (0:00:01.004) 0:01:06.453 ********
2026-04-11 00:27:32.519790 | orchestrator | ok: [testbed-manager]
2026-04-11 00:27:32.519794 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:27:32.519797 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:27:32.519801 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:27:32.519805 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:27:32.519808 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:27:32.519812 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:27:32.519816 | orchestrator |
2026-04-11 00:27:32.519820 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-04-11 00:27:32.519823 | orchestrator | Saturday 11 April 2026 00:27:31 +0000 (0:00:00.234) 0:01:06.688 ********
2026-04-11 00:27:32.519831 | orchestrator | ok: [testbed-manager]
2026-04-11 00:27:32.519835 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:27:32.519839 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:27:32.519842 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:27:32.519846 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:27:32.519850 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:27:32.519853 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:27:32.519857 | orchestrator |
2026-04-11 00:27:32.519861 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-04-11 00:27:32.519864 | orchestrator | Saturday 11 April 2026 00:27:32 +0000 (0:00:00.259) 0:01:06.948 ********
2026-04-11 00:27:32.519868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:27:32.519872 | orchestrator |
2026-04-11 00:27:32.519879 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-04-11 00:29:57.606998 | orchestrator | Saturday 11 April 2026 00:27:32 +0000 (0:00:00.345) 0:01:07.294 ********
2026-04-11 00:29:57.607102 | orchestrator | ok: [testbed-manager]
2026-04-11 00:29:57.607116 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:29:57.607124 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:29:57.607131 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:29:57.607138 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:29:57.607146 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:29:57.607153 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:29:57.607161 | orchestrator |
2026-04-11 00:29:57.607169 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-04-11 00:29:57.607177 | orchestrator | Saturday 11 April 2026 00:27:34 +0000 (0:00:01.865) 0:01:09.160 ******** 2026-04-11 00:29:57.607185 | orchestrator | changed: [testbed-manager] 2026-04-11 00:29:57.607194 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:29:57.607201 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:29:57.607208 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:29:57.607216 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:29:57.607223 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:29:57.607230 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:29:57.607237 | orchestrator | 2026-04-11 00:29:57.607245 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-11 00:29:57.607252 | orchestrator | Saturday 11 April 2026 00:27:35 +0000 (0:00:00.646) 0:01:09.806 ******** 2026-04-11 00:29:57.607258 | orchestrator | ok: [testbed-manager] 2026-04-11 00:29:57.607264 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:29:57.607270 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:29:57.607276 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:29:57.607281 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:29:57.607288 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:29:57.607294 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:29:57.607300 | orchestrator | 2026-04-11 00:29:57.607307 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-11 00:29:57.607313 | orchestrator | Saturday 11 April 2026 00:27:35 +0000 (0:00:00.294) 0:01:10.101 ******** 2026-04-11 00:29:57.607319 | orchestrator | ok: [testbed-manager] 2026-04-11 00:29:57.607325 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:29:57.607330 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:29:57.607336 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:29:57.607342 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:29:57.607349 | 
orchestrator | ok: [testbed-node-1] 2026-04-11 00:29:57.607408 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:29:57.607417 | orchestrator | 2026-04-11 00:29:57.607424 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-11 00:29:57.607431 | orchestrator | Saturday 11 April 2026 00:27:36 +0000 (0:00:01.418) 0:01:11.519 ******** 2026-04-11 00:29:57.607438 | orchestrator | changed: [testbed-manager] 2026-04-11 00:29:57.607445 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:29:57.607456 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:29:57.607487 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:29:57.607495 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:29:57.607502 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:29:57.607509 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:29:57.607516 | orchestrator | 2026-04-11 00:29:57.607523 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-11 00:29:57.607531 | orchestrator | Saturday 11 April 2026 00:27:39 +0000 (0:00:02.313) 0:01:13.832 ******** 2026-04-11 00:29:57.607539 | orchestrator | ok: [testbed-manager] 2026-04-11 00:29:57.607547 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:29:57.607556 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:29:57.607566 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:29:57.607575 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:29:57.607583 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:29:57.607590 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:29:57.607598 | orchestrator | 2026-04-11 00:29:57.607606 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-11 00:29:57.607614 | orchestrator | Saturday 11 April 2026 00:27:41 +0000 (0:00:02.410) 0:01:16.243 ******** 2026-04-11 00:29:57.607622 | orchestrator | ok: [testbed-manager] 2026-04-11 00:29:57.607630 
| orchestrator | ok: [testbed-node-0] 2026-04-11 00:29:57.607638 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:29:57.607647 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:29:57.607655 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:29:57.607663 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:29:57.607671 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:29:57.607679 | orchestrator | 2026-04-11 00:29:57.607687 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-11 00:29:57.607695 | orchestrator | Saturday 11 April 2026 00:28:24 +0000 (0:00:43.367) 0:01:59.611 ******** 2026-04-11 00:29:57.607703 | orchestrator | changed: [testbed-manager] 2026-04-11 00:29:57.607712 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:29:57.607720 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:29:57.607728 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:29:57.607736 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:29:57.607743 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:29:57.607751 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:29:57.607759 | orchestrator | 2026-04-11 00:29:57.607784 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-11 00:29:57.607793 | orchestrator | Saturday 11 April 2026 00:29:42 +0000 (0:01:17.561) 0:03:17.173 ******** 2026-04-11 00:29:57.607800 | orchestrator | ok: [testbed-manager] 2026-04-11 00:29:57.607807 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:29:57.607814 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:29:57.607821 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:29:57.607828 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:29:57.607833 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:29:57.607839 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:29:57.607845 | orchestrator | 2026-04-11 00:29:57.607851 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-04-11 00:29:57.607860 | orchestrator | Saturday 11 April 2026 00:29:44 +0000 (0:00:01.839) 0:03:19.012 ******** 2026-04-11 00:29:57.607867 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:29:57.607874 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:29:57.607881 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:29:57.607888 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:29:57.607895 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:29:57.607902 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:29:57.607909 | orchestrator | changed: [testbed-manager] 2026-04-11 00:29:57.607916 | orchestrator | 2026-04-11 00:29:57.607924 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-11 00:29:57.607931 | orchestrator | Saturday 11 April 2026 00:29:56 +0000 (0:00:12.251) 0:03:31.263 ******** 2026-04-11 00:29:57.607965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-11 00:29:57.607989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-11 00:29:57.607999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-11 00:29:57.608008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-11 00:29:57.608015 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-11 00:29:57.608022 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-04-11 00:29:57.608030 | orchestrator | 2026-04-11 00:29:57.608037 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-11 00:29:57.608044 | orchestrator | Saturday 11 April 2026 00:29:56 +0000 (0:00:00.385) 0:03:31.649 ******** 2026-04-11 00:29:57.608052 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-11 00:29:57.608062 | orchestrator | 
skipping: [testbed-manager] 2026-04-11 00:29:57.608070 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-11 00:29:57.608077 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-11 00:29:57.608083 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:29:57.608089 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:29:57.608095 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-11 00:29:57.608103 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:29:57.608110 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-11 00:29:57.608117 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-11 00:29:57.608124 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-11 00:29:57.608131 | orchestrator | 2026-04-11 00:29:57.608139 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-11 00:29:57.608146 | orchestrator | Saturday 11 April 2026 00:29:57 +0000 (0:00:00.686) 0:03:32.335 ******** 2026-04-11 00:29:57.608160 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-11 00:29:57.608168 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-11 00:29:57.608176 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-11 00:29:57.608183 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-11 00:29:57.608190 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-11 00:29:57.608203 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-11 00:30:07.575168 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-11 00:30:07.575252 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-11 00:30:07.575258 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-11 00:30:07.575263 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-11 00:30:07.575268 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:30:07.575273 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-11 00:30:07.575277 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-11 00:30:07.575281 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-11 00:30:07.575285 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-11 00:30:07.575289 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-11 00:30:07.575293 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-11 00:30:07.575296 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-11 00:30:07.575301 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-11 00:30:07.575304 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-11 00:30:07.575308 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-11 00:30:07.575312 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-11 00:30:07.575315 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-11 00:30:07.575319 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:30:07.575323 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-11 00:30:07.575327 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-11 00:30:07.575330 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-11 00:30:07.575334 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-11 00:30:07.575440 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-11 00:30:07.575445 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-11 00:30:07.575449 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-11 00:30:07.575453 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-11 00:30:07.575457 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-11 00:30:07.575483 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:30:07.575497 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-11 00:30:07.575501 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-11 00:30:07.575505 | orchestrator | skipping: [testbed-node-5] => 
(item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-11 00:30:07.575509 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-11 00:30:07.575512 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-11 00:30:07.575516 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-11 00:30:07.575520 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-11 00:30:07.575523 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-11 00:30:07.575527 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-11 00:30:07.575531 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:30:07.575534 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-11 00:30:07.575538 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-11 00:30:07.575542 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-11 00:30:07.575546 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-11 00:30:07.575549 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-11 00:30:07.575564 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-11 00:30:07.575568 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-11 00:30:07.575572 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-11 00:30:07.575576 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-11 00:30:07.575579 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-11 00:30:07.575583 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-11 00:30:07.575586 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-11 00:30:07.575590 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-11 00:30:07.575594 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-11 00:30:07.575598 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-11 00:30:07.575601 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-11 00:30:07.575605 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-11 00:30:07.575609 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-11 00:30:07.575613 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-11 00:30:07.575616 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-11 00:30:07.575620 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-11 00:30:07.575623 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-11 00:30:07.575632 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-11 00:30:07.575636 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-11 00:30:07.575640 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-11 00:30:07.575643 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-11 00:30:07.575647 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-11 00:30:07.575670 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-11 00:30:07.575674 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-11 00:30:07.575678 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-11 00:30:07.575682 | orchestrator |
2026-04-11 00:30:07.575687 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-11 00:30:07.575690 | orchestrator | Saturday 11 April 2026 00:30:06 +0000 (0:00:08.890) 0:03:41.225 ********
2026-04-11 00:30:07.575694 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 00:30:07.575698 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 00:30:07.575702 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 00:30:07.575706 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 00:30:07.575710 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 00:30:07.575713 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 00:30:07.575717 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 00:30:07.575721 | orchestrator |
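The group-scoped sysctl tuning applied above (a `vm.swappiness` baseline for all hosts, `net.netfilter.nf_conntrack_max` only on compute/network nodes, `fs.inotify.max_user_instances` only on k3s nodes) can be sketched as a small merge-and-render helper. The parameter names and values below are taken from this log; the helper itself and its group names are a hypothetical illustration, not the `osism.commons.sysctl` role's actual implementation.

```python
# Hypothetical sketch: render a sysctl.conf fragment for a host from the
# parameter sets of the inventory groups it belongs to (values from this run).
SYSCTL_BY_GROUP = {
    "generic": {"vm.swappiness": 1},
    "compute": {"net.netfilter.nf_conntrack_max": 1048576},
    "network": {"net.netfilter.nf_conntrack_max": 1048576},
    "k3s_node": {"fs.inotify.max_user_instances": 1024},
}

def render_sysctl_conf(groups):
    """Merge all group parameter sets and render sysctl.conf-style lines."""
    merged = {}
    for group in groups:
        merged.update(SYSCTL_BY_GROUP.get(group, {}))
    return "\n".join(f"{name} = {value}" for name, value in sorted(merged.items()))

# e.g. a node in both "generic" and "compute", as testbed-node-3 is here
print(render_sysctl_conf(["generic", "compute"]))
```

This mirrors why the log shows `skipping:` for the manager on the compute and k3s tasks: hosts outside a group simply contribute no parameters from it.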
2026-04-11 00:30:07.575725 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-11 00:30:07.575729 | orchestrator | Saturday 11 April 2026 00:30:07 +0000 (0:00:00.587) 0:03:41.813 ********
2026-04-11 00:30:07.575732 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:07.575736 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:30:07.575740 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:07.575744 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:30:07.575747 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:07.575751 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:07.575755 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:30:07.575759 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:30:07.575762 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:07.575766 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:07.575773 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:21.127571 | orchestrator |
2026-04-11 00:30:21.127663 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-11 00:30:21.127678 | orchestrator | Saturday 11 April 2026 00:30:07 +0000 (0:00:00.584) 0:03:42.397 ********
2026-04-11 00:30:21.127690 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:21.127702 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:30:21.127715 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:21.127752 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:21.127764 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:30:21.127774 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:30:21.127785 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:21.127795 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:30:21.127806 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:21.127817 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:21.127826 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 00:30:21.127838 | orchestrator |
2026-04-11 00:30:21.127850 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-11 00:30:21.127862 | orchestrator | Saturday 11 April 2026 00:30:08 +0000 (0:00:00.557) 0:03:42.955 ********
2026-04-11 00:30:21.127872 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-11 00:30:21.127883 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-11 00:30:21.127913 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:30:21.127926 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-11 00:30:21.127938 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:30:21.127951 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:30:21.127963 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-11 00:30:21.127976 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:30:21.127988 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-11 00:30:21.128000 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-11 00:30:21.128014 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-11 00:30:21.128030 | orchestrator |
2026-04-11 00:30:21.128040 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-11 00:30:21.128051 | orchestrator | Saturday 11 April 2026 00:30:08 +0000 (0:00:00.720) 0:03:43.675 ********
2026-04-11 00:30:21.128062 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:30:21.128073 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:30:21.128085 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:30:21.128096 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:30:21.128108 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:30:21.128116 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:30:21.128129 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:30:21.128137 | orchestrator |
2026-04-11 00:30:21.128145 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-04-11 00:30:21.128153 | orchestrator | Saturday 11 April 2026 00:30:09 +0000 (0:00:00.279) 0:03:43.955 ********
2026-04-11 00:30:21.128161 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:30:21.128169 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:30:21.128176 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:30:21.128184 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:30:21.128191 |
orchestrator | ok: [testbed-node-0] 2026-04-11 00:30:21.128199 | orchestrator | ok: [testbed-manager] 2026-04-11 00:30:21.128206 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:30:21.128214 | orchestrator | 2026-04-11 00:30:21.128221 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-04-11 00:30:21.128229 | orchestrator | Saturday 11 April 2026 00:30:14 +0000 (0:00:05.630) 0:03:49.586 ******** 2026-04-11 00:30:21.128237 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-04-11 00:30:21.128253 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:30:21.128261 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-04-11 00:30:21.128269 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-04-11 00:30:21.128277 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:30:21.128284 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-04-11 00:30:21.128292 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:30:21.128300 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:30:21.128308 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-04-11 00:30:21.128377 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-04-11 00:30:21.128388 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:30:21.128398 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:30:21.128409 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-04-11 00:30:21.128420 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:30:21.128431 | orchestrator | 2026-04-11 00:30:21.128441 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-04-11 00:30:21.128468 | orchestrator | Saturday 11 April 2026 00:30:15 +0000 (0:00:00.277) 0:03:49.863 ******** 2026-04-11 00:30:21.128481 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-04-11 00:30:21.128492 | orchestrator | ok: [testbed-node-2] => 
(item=cron) 2026-04-11 00:30:21.128503 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-04-11 00:30:21.128532 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-04-11 00:30:21.128539 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-04-11 00:30:21.128546 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-04-11 00:30:21.128552 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-04-11 00:30:21.128559 | orchestrator | 2026-04-11 00:30:21.128566 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-04-11 00:30:21.128573 | orchestrator | Saturday 11 April 2026 00:30:16 +0000 (0:00:01.755) 0:03:51.618 ******** 2026-04-11 00:30:21.128581 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:30:21.128590 | orchestrator | 2026-04-11 00:30:21.128597 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-04-11 00:30:21.128604 | orchestrator | Saturday 11 April 2026 00:30:17 +0000 (0:00:00.394) 0:03:52.013 ******** 2026-04-11 00:30:21.128610 | orchestrator | ok: [testbed-manager] 2026-04-11 00:30:21.128617 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:30:21.128624 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:30:21.128630 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:30:21.128637 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:30:21.128643 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:30:21.128650 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:30:21.128657 | orchestrator | 2026-04-11 00:30:21.128663 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-04-11 00:30:21.128670 | orchestrator | Saturday 11 April 2026 00:30:18 +0000 (0:00:01.418) 0:03:53.431 
******** 2026-04-11 00:30:21.128677 | orchestrator | ok: [testbed-manager] 2026-04-11 00:30:21.128683 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:30:21.128690 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:30:21.128696 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:30:21.128703 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:30:21.128709 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:30:21.128716 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:30:21.128722 | orchestrator | 2026-04-11 00:30:21.128729 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-04-11 00:30:21.128736 | orchestrator | Saturday 11 April 2026 00:30:19 +0000 (0:00:00.727) 0:03:54.159 ******** 2026-04-11 00:30:21.128742 | orchestrator | changed: [testbed-manager] 2026-04-11 00:30:21.128749 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:30:21.128755 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:30:21.128770 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:30:21.128776 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:30:21.128783 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:30:21.128789 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:30:21.128796 | orchestrator | 2026-04-11 00:30:21.128803 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-04-11 00:30:21.128809 | orchestrator | Saturday 11 April 2026 00:30:19 +0000 (0:00:00.624) 0:03:54.783 ******** 2026-04-11 00:30:21.128816 | orchestrator | ok: [testbed-manager] 2026-04-11 00:30:21.128823 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:30:21.128829 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:30:21.128836 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:30:21.128842 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:30:21.128849 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:30:21.128855 | orchestrator | ok: [testbed-node-3] 2026-04-11 
00:30:21.128862 | orchestrator | 2026-04-11 00:30:21.128869 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-04-11 00:30:21.128875 | orchestrator | Saturday 11 April 2026 00:30:20 +0000 (0:00:00.584) 0:03:55.368 ******** 2026-04-11 00:30:21.128892 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775865944.9274006, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 00:30:21.128902 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775865924.326533, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 00:30:21.128909 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775865954.5367205, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 00:30:21.128944 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775865966.5874832, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 00:30:26.994733 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775865904.7331836, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 00:30:26.994847 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775865950.7701397, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 00:30:26.994860 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775865922.7295241, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 00:30:26.994880 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 00:30:26.994885 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 00:30:26.994889 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 
1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 00:30:26.994893 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 00:30:26.994910 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 00:30:26.994919 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}) 2026-04-11 00:30:26.994923 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 00:30:26.994928 | orchestrator | 2026-04-11 00:30:26.994936 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-04-11 00:30:26.994944 | orchestrator | Saturday 11 April 2026 00:30:21 +0000 (0:00:01.046) 0:03:56.415 ******** 2026-04-11 00:30:26.994951 | orchestrator | changed: [testbed-manager] 2026-04-11 00:30:26.994959 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:30:26.994966 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:30:26.994973 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:30:26.994979 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:30:26.994995 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:30:26.994999 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:30:26.995003 | orchestrator | 2026-04-11 00:30:26.995007 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-04-11 00:30:26.995016 | orchestrator | Saturday 11 April 2026 00:30:22 +0000 (0:00:01.172) 0:03:57.587 ******** 2026-04-11 00:30:26.995021 | orchestrator | changed: [testbed-manager] 2026-04-11 00:30:26.995028 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:30:26.995034 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:30:26.995040 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:30:26.995046 | orchestrator | changed: 
[testbed-node-3] 2026-04-11 00:30:26.995053 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:30:26.995060 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:30:26.995066 | orchestrator | 2026-04-11 00:30:26.995073 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-04-11 00:30:26.995080 | orchestrator | Saturday 11 April 2026 00:30:24 +0000 (0:00:01.309) 0:03:58.896 ******** 2026-04-11 00:30:26.995087 | orchestrator | changed: [testbed-manager] 2026-04-11 00:30:26.995093 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:30:26.995099 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:30:26.995105 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:30:26.995111 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:30:26.995117 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:30:26.995123 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:30:26.995130 | orchestrator | 2026-04-11 00:30:26.995136 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-04-11 00:30:26.995143 | orchestrator | Saturday 11 April 2026 00:30:25 +0000 (0:00:01.448) 0:04:00.345 ******** 2026-04-11 00:30:26.995149 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:30:26.995156 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:30:26.995163 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:30:26.995169 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:30:26.995176 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:30:26.995182 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:30:26.995189 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:30:26.995200 | orchestrator | 2026-04-11 00:30:26.995207 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-04-11 00:30:26.995213 | orchestrator | Saturday 11 April 2026 00:30:25 +0000 (0:00:00.249) 0:04:00.594 
******** 2026-04-11 00:30:26.995220 | orchestrator | ok: [testbed-manager] 2026-04-11 00:30:26.995226 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:30:26.995230 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:30:26.995233 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:30:26.995239 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:30:26.995245 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:30:26.995250 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:30:26.995256 | orchestrator | 2026-04-11 00:30:26.995262 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-04-11 00:30:26.995268 | orchestrator | Saturday 11 April 2026 00:30:26 +0000 (0:00:00.799) 0:04:01.393 ******** 2026-04-11 00:30:26.995276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:30:26.995284 | orchestrator | 2026-04-11 00:30:26.995291 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-04-11 00:30:26.995316 | orchestrator | Saturday 11 April 2026 00:30:26 +0000 (0:00:00.385) 0:04:01.779 ******** 2026-04-11 00:31:43.081389 | orchestrator | ok: [testbed-manager] 2026-04-11 00:31:43.081494 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:31:43.081510 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:31:43.081521 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:31:43.081532 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:31:43.081542 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:31:43.081552 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:31:43.081563 | orchestrator | 2026-04-11 00:31:43.081580 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-04-11 00:31:43.081598 | 
orchestrator | Saturday 11 April 2026 00:30:35 +0000 (0:00:08.690) 0:04:10.470 ******** 2026-04-11 00:31:43.081614 | orchestrator | ok: [testbed-manager] 2026-04-11 00:31:43.081630 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:31:43.081647 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:31:43.081663 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:31:43.081679 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:31:43.081694 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:31:43.081709 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:31:43.081724 | orchestrator | 2026-04-11 00:31:43.081740 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-04-11 00:31:43.081755 | orchestrator | Saturday 11 April 2026 00:30:36 +0000 (0:00:01.096) 0:04:11.566 ******** 2026-04-11 00:31:43.081771 | orchestrator | ok: [testbed-manager] 2026-04-11 00:31:43.081787 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:31:43.081803 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:31:43.081819 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:31:43.081836 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:31:43.081853 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:31:43.081870 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:31:43.081889 | orchestrator | 2026-04-11 00:31:43.081907 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-04-11 00:31:43.081926 | orchestrator | Saturday 11 April 2026 00:30:37 +0000 (0:00:00.983) 0:04:12.549 ******** 2026-04-11 00:31:43.081938 | orchestrator | ok: [testbed-manager] 2026-04-11 00:31:43.081971 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:31:43.081983 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:31:43.081994 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:31:43.082006 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:31:43.082078 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:31:43.082090 | 
orchestrator | ok: [testbed-node-5] 2026-04-11 00:31:43.082102 | orchestrator | 2026-04-11 00:31:43.082114 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-04-11 00:31:43.082150 | orchestrator | Saturday 11 April 2026 00:30:38 +0000 (0:00:00.295) 0:04:12.845 ******** 2026-04-11 00:31:43.082161 | orchestrator | ok: [testbed-manager] 2026-04-11 00:31:43.082171 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:31:43.082182 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:31:43.082217 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:31:43.082250 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:31:43.082262 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:31:43.082272 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:31:43.082283 | orchestrator | 2026-04-11 00:31:43.082294 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-04-11 00:31:43.082306 | orchestrator | Saturday 11 April 2026 00:30:38 +0000 (0:00:00.296) 0:04:13.141 ******** 2026-04-11 00:31:43.082323 | orchestrator | ok: [testbed-manager] 2026-04-11 00:31:43.082334 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:31:43.082345 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:31:43.082355 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:31:43.082366 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:31:43.082377 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:31:43.082387 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:31:43.082398 | orchestrator | 2026-04-11 00:31:43.082409 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-04-11 00:31:43.082420 | orchestrator | Saturday 11 April 2026 00:30:38 +0000 (0:00:00.269) 0:04:13.411 ******** 2026-04-11 00:31:43.082430 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:31:43.082441 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:31:43.082451 | 
orchestrator | ok: [testbed-manager] 2026-04-11 00:31:43.082462 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:31:43.082473 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:31:43.082483 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:31:43.082494 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:31:43.082504 | orchestrator | 2026-04-11 00:31:43.082515 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-04-11 00:31:43.082525 | orchestrator | Saturday 11 April 2026 00:30:43 +0000 (0:00:05.148) 0:04:18.559 ******** 2026-04-11 00:31:43.082538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:31:43.082551 | orchestrator | 2026-04-11 00:31:43.082562 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-04-11 00:31:43.082573 | orchestrator | Saturday 11 April 2026 00:30:44 +0000 (0:00:00.387) 0:04:18.946 ******** 2026-04-11 00:31:43.082584 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-04-11 00:31:43.082594 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-04-11 00:31:43.082605 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-04-11 00:31:43.082616 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-04-11 00:31:43.082627 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:31:43.082637 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-04-11 00:31:43.082648 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-04-11 00:31:43.082659 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:31:43.082669 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-04-11 00:31:43.082680 | 
orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-04-11 00:31:43.082691 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:31:43.082701 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-04-11 00:31:43.082712 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-04-11 00:31:43.082722 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:31:43.082733 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:31:43.082744 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-04-11 00:31:43.082785 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-04-11 00:31:43.082797 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:31:43.082808 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-04-11 00:31:43.082818 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-04-11 00:31:43.082829 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:31:43.082840 | orchestrator | 2026-04-11 00:31:43.082851 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-04-11 00:31:43.082862 | orchestrator | Saturday 11 April 2026 00:30:44 +0000 (0:00:00.334) 0:04:19.281 ******** 2026-04-11 00:31:43.082873 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:31:43.082884 | orchestrator | 2026-04-11 00:31:43.082896 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-04-11 00:31:43.082906 | orchestrator | Saturday 11 April 2026 00:30:44 +0000 (0:00:00.456) 0:04:19.737 ******** 2026-04-11 00:31:43.082917 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-04-11 00:31:43.082928 | orchestrator | 
skipping: [testbed-node-0] => (item=ModemManager.service)  2026-04-11 00:31:43.082939 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:31:43.082950 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-04-11 00:31:43.082960 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:31:43.082971 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:31:43.082982 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-04-11 00:31:43.082992 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-04-11 00:31:43.083003 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:31:43.083014 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-04-11 00:31:43.083025 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:31:43.083036 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:31:43.083046 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-04-11 00:31:43.083057 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:31:43.083068 | orchestrator | 2026-04-11 00:31:43.083079 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-04-11 00:31:43.083090 | orchestrator | Saturday 11 April 2026 00:30:45 +0000 (0:00:00.280) 0:04:20.018 ******** 2026-04-11 00:31:43.083101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:31:43.083112 | orchestrator | 2026-04-11 00:31:43.083138 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-04-11 00:31:43.083150 | orchestrator | Saturday 11 April 2026 00:30:45 +0000 (0:00:00.379) 0:04:20.397 ******** 2026-04-11 00:31:43.083160 | orchestrator | changed: [testbed-node-1] 
2026-04-11 00:31:43.083171 | orchestrator | changed: [testbed-manager] 2026-04-11 00:31:43.083182 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:31:43.083193 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:31:43.083204 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:31:43.083214 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:31:43.083264 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:31:43.083275 | orchestrator | 2026-04-11 00:31:43.083286 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-04-11 00:31:43.083297 | orchestrator | Saturday 11 April 2026 00:31:17 +0000 (0:00:31.936) 0:04:52.333 ******** 2026-04-11 00:31:43.083308 | orchestrator | changed: [testbed-manager] 2026-04-11 00:31:43.083319 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:31:43.083330 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:31:43.083340 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:31:43.083359 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:31:43.083369 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:31:43.083380 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:31:43.083391 | orchestrator | 2026-04-11 00:31:43.083402 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-04-11 00:31:43.083413 | orchestrator | Saturday 11 April 2026 00:31:26 +0000 (0:00:09.279) 0:05:01.613 ******** 2026-04-11 00:31:43.083423 | orchestrator | changed: [testbed-manager] 2026-04-11 00:31:43.083434 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:31:43.083445 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:31:43.083455 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:31:43.083466 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:31:43.083477 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:31:43.083487 | orchestrator | changed: [testbed-node-0] 2026-04-11 
00:31:43.083498 | orchestrator | 2026-04-11 00:31:43.083509 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-04-11 00:31:43.083520 | orchestrator | Saturday 11 April 2026 00:31:35 +0000 (0:00:08.211) 0:05:09.825 ******** 2026-04-11 00:31:43.083530 | orchestrator | ok: [testbed-manager] 2026-04-11 00:31:43.083541 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:31:43.083552 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:31:43.083563 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:31:43.083574 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:31:43.083585 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:31:43.083595 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:31:43.083606 | orchestrator | 2026-04-11 00:31:43.083617 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-04-11 00:31:43.083628 | orchestrator | Saturday 11 April 2026 00:31:36 +0000 (0:00:01.671) 0:05:11.496 ******** 2026-04-11 00:31:43.083639 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:31:43.083650 | orchestrator | changed: [testbed-manager] 2026-04-11 00:31:43.083661 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:31:43.083672 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:31:43.083682 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:31:43.083693 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:31:43.083704 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:31:43.083715 | orchestrator | 2026-04-11 00:31:43.083732 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-04-11 00:31:54.526974 | orchestrator | Saturday 11 April 2026 00:31:43 +0000 (0:00:06.361) 0:05:17.858 ******** 2026-04-11 00:31:54.527070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:31:54.527082 | orchestrator | 2026-04-11 00:31:54.527091 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-04-11 00:31:54.527099 | orchestrator | Saturday 11 April 2026 00:31:43 +0000 (0:00:00.385) 0:05:18.243 ******** 2026-04-11 00:31:54.527104 | orchestrator | changed: [testbed-manager] 2026-04-11 00:31:54.527109 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:31:54.527113 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:31:54.527117 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:31:54.527121 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:31:54.527125 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:31:54.527129 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:31:54.527133 | orchestrator | 2026-04-11 00:31:54.527138 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-04-11 00:31:54.527142 | orchestrator | Saturday 11 April 2026 00:31:44 +0000 (0:00:00.727) 0:05:18.971 ******** 2026-04-11 00:31:54.527146 | orchestrator | ok: [testbed-manager] 2026-04-11 00:31:54.527151 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:31:54.527155 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:31:54.527159 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:31:54.527162 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:31:54.527183 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:31:54.527187 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:31:54.527191 | orchestrator | 2026-04-11 00:31:54.527195 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-04-11 00:31:54.527200 | orchestrator | Saturday 11 April 2026 00:31:45 +0000 (0:00:01.707) 0:05:20.679 ******** 2026-04-11 00:31:54.527239 | orchestrator | changed: [testbed-manager] 2026-04-11 00:31:54.527243 | orchestrator | 
changed: [testbed-node-4] 2026-04-11 00:31:54.527247 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:31:54.527251 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:31:54.527255 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:31:54.527259 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:31:54.527262 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:31:54.527266 | orchestrator | 2026-04-11 00:31:54.527270 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-04-11 00:31:54.527274 | orchestrator | Saturday 11 April 2026 00:31:46 +0000 (0:00:00.735) 0:05:21.414 ******** 2026-04-11 00:31:54.527278 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:31:54.527282 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:31:54.527285 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:31:54.527289 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:31:54.527293 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:31:54.527297 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:31:54.527313 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:31:54.527317 | orchestrator | 2026-04-11 00:31:54.527320 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-04-11 00:31:54.527324 | orchestrator | Saturday 11 April 2026 00:31:46 +0000 (0:00:00.246) 0:05:21.661 ******** 2026-04-11 00:31:54.527328 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:31:54.527332 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:31:54.527335 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:31:54.527339 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:31:54.527343 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:31:54.527347 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:31:54.527350 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:31:54.527354 | orchestrator | 2026-04-11 
00:31:54.527358 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-04-11 00:31:54.527362 | orchestrator | Saturday 11 April 2026 00:31:47 +0000 (0:00:00.373) 0:05:22.034 ******** 2026-04-11 00:31:54.527366 | orchestrator | ok: [testbed-manager] 2026-04-11 00:31:54.527370 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:31:54.527373 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:31:54.527377 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:31:54.527381 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:31:54.527385 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:31:54.527388 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:31:54.527392 | orchestrator | 2026-04-11 00:31:54.527396 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-04-11 00:31:54.527400 | orchestrator | Saturday 11 April 2026 00:31:47 +0000 (0:00:00.415) 0:05:22.449 ******** 2026-04-11 00:31:54.527404 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:31:54.527407 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:31:54.527411 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:31:54.527415 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:31:54.527421 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:31:54.527427 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:31:54.527433 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:31:54.527439 | orchestrator | 2026-04-11 00:31:54.527446 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-11 00:31:54.527454 | orchestrator | Saturday 11 April 2026 00:31:47 +0000 (0:00:00.296) 0:05:22.746 ******** 2026-04-11 00:31:54.527460 | orchestrator | ok: [testbed-manager] 2026-04-11 00:31:54.527467 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:31:54.527474 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:31:54.527486 | orchestrator | 
ok: [testbed-node-2] 2026-04-11 00:31:54.527492 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:31:54.527499 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:31:54.527504 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:31:54.527508 | orchestrator | 2026-04-11 00:31:54.527512 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-11 00:31:54.527516 | orchestrator | Saturday 11 April 2026 00:31:48 +0000 (0:00:00.337) 0:05:23.084 ******** 2026-04-11 00:31:54.527520 | orchestrator | ok: [testbed-manager] =>  2026-04-11 00:31:54.527524 | orchestrator |  docker_version: 5:27.5.1 2026-04-11 00:31:54.527528 | orchestrator | ok: [testbed-node-0] =>  2026-04-11 00:31:54.527532 | orchestrator |  docker_version: 5:27.5.1 2026-04-11 00:31:54.527537 | orchestrator | ok: [testbed-node-1] =>  2026-04-11 00:31:54.527541 | orchestrator |  docker_version: 5:27.5.1 2026-04-11 00:31:54.527545 | orchestrator | ok: [testbed-node-2] =>  2026-04-11 00:31:54.527550 | orchestrator |  docker_version: 5:27.5.1 2026-04-11 00:31:54.527564 | orchestrator | ok: [testbed-node-3] =>  2026-04-11 00:31:54.527569 | orchestrator |  docker_version: 5:27.5.1 2026-04-11 00:31:54.527573 | orchestrator | ok: [testbed-node-4] =>  2026-04-11 00:31:54.527578 | orchestrator |  docker_version: 5:27.5.1 2026-04-11 00:31:54.527582 | orchestrator | ok: [testbed-node-5] =>  2026-04-11 00:31:54.527586 | orchestrator |  docker_version: 5:27.5.1 2026-04-11 00:31:54.527591 | orchestrator | 2026-04-11 00:31:54.527595 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-11 00:31:54.527600 | orchestrator | Saturday 11 April 2026 00:31:48 +0000 (0:00:00.261) 0:05:23.346 ******** 2026-04-11 00:31:54.527604 | orchestrator | ok: [testbed-manager] =>  2026-04-11 00:31:54.527608 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-11 00:31:54.527613 | orchestrator | ok: [testbed-node-0] =>  2026-04-11 
00:31:54.527617 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-11 00:31:54.527621 | orchestrator | ok: [testbed-node-1] =>  2026-04-11 00:31:54.527625 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-11 00:31:54.527630 | orchestrator | ok: [testbed-node-2] =>  2026-04-11 00:31:54.527634 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-11 00:31:54.527638 | orchestrator | ok: [testbed-node-3] =>  2026-04-11 00:31:54.527642 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-11 00:31:54.527647 | orchestrator | ok: [testbed-node-4] =>  2026-04-11 00:31:54.527651 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-11 00:31:54.527655 | orchestrator | ok: [testbed-node-5] =>  2026-04-11 00:31:54.527659 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-11 00:31:54.527664 | orchestrator | 2026-04-11 00:31:54.527668 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-11 00:31:54.527673 | orchestrator | Saturday 11 April 2026 00:31:48 +0000 (0:00:00.255) 0:05:23.601 ******** 2026-04-11 00:31:54.527677 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:31:54.527681 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:31:54.527685 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:31:54.527689 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:31:54.527694 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:31:54.527698 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:31:54.527702 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:31:54.527706 | orchestrator | 2026-04-11 00:31:54.527710 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-11 00:31:54.527715 | orchestrator | Saturday 11 April 2026 00:31:49 +0000 (0:00:00.242) 0:05:23.843 ******** 2026-04-11 00:31:54.527719 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:31:54.527723 | orchestrator | skipping: [testbed-node-0] 
2026-04-11 00:31:54.527727 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:31:54.527731 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:31:54.527736 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:31:54.527740 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:31:54.527744 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:31:54.527748 | orchestrator | 2026-04-11 00:31:54.527757 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-11 00:31:54.527761 | orchestrator | Saturday 11 April 2026 00:31:49 +0000 (0:00:00.303) 0:05:24.146 ******** 2026-04-11 00:31:54.527770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:31:54.527776 | orchestrator | 2026-04-11 00:31:54.527780 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-11 00:31:54.527784 | orchestrator | Saturday 11 April 2026 00:31:49 +0000 (0:00:00.386) 0:05:24.533 ******** 2026-04-11 00:31:54.527788 | orchestrator | ok: [testbed-manager] 2026-04-11 00:31:54.527794 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:31:54.527800 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:31:54.527807 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:31:54.527814 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:31:54.527821 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:31:54.527828 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:31:54.527835 | orchestrator | 2026-04-11 00:31:54.527843 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-11 00:31:54.527850 | orchestrator | Saturday 11 April 2026 00:31:50 +0000 (0:00:00.824) 0:05:25.357 ******** 2026-04-11 00:31:54.527857 | orchestrator 
| ok: [testbed-manager] 2026-04-11 00:31:54.527864 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:31:54.527871 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:31:54.527879 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:31:54.527886 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:31:54.527894 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:31:54.527901 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:31:54.527908 | orchestrator | 2026-04-11 00:31:54.527915 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-11 00:31:54.527923 | orchestrator | Saturday 11 April 2026 00:31:54 +0000 (0:00:03.607) 0:05:28.964 ******** 2026-04-11 00:31:54.527931 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-11 00:31:54.527938 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-11 00:31:54.527945 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-11 00:31:54.527952 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-11 00:31:54.527959 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-11 00:31:54.527967 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-11 00:31:54.527974 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:31:54.527981 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-04-11 00:31:54.527988 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-11 00:31:54.527995 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-11 00:31:54.528002 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:31:54.528009 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-11 00:31:54.528016 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-11 00:31:54.528023 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  
2026-04-11 00:31:54.528030 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:31:54.528037 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-11 00:31:54.528048 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-11 00:32:57.894816 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-11 00:32:57.894964 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:32:57.894994 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-11 00:32:57.895007 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-11 00:32:57.895019 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-11 00:32:57.895030 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:32:57.895067 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:32:57.895078 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-11 00:32:57.895089 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-11 00:32:57.895177 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-11 00:32:57.895189 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:32:57.895200 | orchestrator | 2026-04-11 00:32:57.895214 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-11 00:32:57.895231 | orchestrator | Saturday 11 April 2026 00:31:54 +0000 (0:00:00.571) 0:05:29.535 ******** 2026-04-11 00:32:57.895250 | orchestrator | ok: [testbed-manager] 2026-04-11 00:32:57.895269 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:32:57.895287 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:32:57.895303 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:32:57.895314 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:32:57.895325 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:32:57.895339 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:32:57.895351 | 
orchestrator | 2026-04-11 00:32:57.895364 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-11 00:32:57.895377 | orchestrator | Saturday 11 April 2026 00:32:02 +0000 (0:00:07.606) 0:05:37.142 ******** 2026-04-11 00:32:57.895389 | orchestrator | ok: [testbed-manager] 2026-04-11 00:32:57.895402 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:32:57.895414 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:32:57.895427 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:32:57.895439 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:32:57.895452 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:32:57.895464 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:32:57.895476 | orchestrator | 2026-04-11 00:32:57.895489 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-11 00:32:57.895509 | orchestrator | Saturday 11 April 2026 00:32:03 +0000 (0:00:01.032) 0:05:38.175 ******** 2026-04-11 00:32:57.895528 | orchestrator | ok: [testbed-manager] 2026-04-11 00:32:57.895546 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:32:57.895564 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:32:57.895582 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:32:57.895600 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:32:57.895618 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:32:57.895635 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:32:57.895654 | orchestrator | 2026-04-11 00:32:57.895674 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-11 00:32:57.895694 | orchestrator | Saturday 11 April 2026 00:32:12 +0000 (0:00:08.958) 0:05:47.133 ******** 2026-04-11 00:32:57.895712 | orchestrator | changed: [testbed-manager] 2026-04-11 00:32:57.895729 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:32:57.895746 | orchestrator | changed: 
[testbed-node-1] 2026-04-11 00:32:57.895762 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:32:57.895779 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:32:57.895797 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:32:57.895814 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:32:57.895832 | orchestrator | 2026-04-11 00:32:57.895850 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-11 00:32:57.895868 | orchestrator | Saturday 11 April 2026 00:32:15 +0000 (0:00:03.228) 0:05:50.361 ******** 2026-04-11 00:32:57.895885 | orchestrator | ok: [testbed-manager] 2026-04-11 00:32:57.895903 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:32:57.895921 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:32:57.895938 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:32:57.895956 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:32:57.895975 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:32:57.895992 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:32:57.896009 | orchestrator | 2026-04-11 00:32:57.896026 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-11 00:32:57.896043 | orchestrator | Saturday 11 April 2026 00:32:16 +0000 (0:00:01.293) 0:05:51.654 ******** 2026-04-11 00:32:57.896077 | orchestrator | ok: [testbed-manager] 2026-04-11 00:32:57.896172 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:32:57.896197 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:32:57.896215 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:32:57.896234 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:32:57.896252 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:32:57.896270 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:32:57.896288 | orchestrator | 2026-04-11 00:32:57.896304 | orchestrator | TASK [osism.services.docker : Unlock containerd package] 
***********************
2026-04-11 00:32:57.896322 | orchestrator | Saturday 11 April 2026 00:32:18 +0000 (0:00:01.296) 0:05:52.951 ********
2026-04-11 00:32:57.896340 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:32:57.896357 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:32:57.896374 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:32:57.896392 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:32:57.896410 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:32:57.896427 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:32:57.896445 | orchestrator | changed: [testbed-manager]
2026-04-11 00:32:57.896460 | orchestrator |
2026-04-11 00:32:57.896541 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-04-11 00:32:57.896564 | orchestrator | Saturday 11 April 2026 00:32:18 +0000 (0:00:00.644) 0:05:53.595 ********
2026-04-11 00:32:57.896580 | orchestrator | ok: [testbed-manager]
2026-04-11 00:32:57.896597 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:32:57.896613 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:32:57.896629 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:32:57.896646 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:32:57.896661 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:32:57.896676 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:32:57.896694 | orchestrator |
2026-04-11 00:32:57.896712 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-04-11 00:32:57.896763 | orchestrator | Saturday 11 April 2026 00:32:28 +0000 (0:00:09.955) 0:06:03.550 ********
2026-04-11 00:32:57.896784 | orchestrator | changed: [testbed-manager]
2026-04-11 00:32:57.896800 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:32:57.896816 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:32:57.896833 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:32:57.896850 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:32:57.896867 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:32:57.896884 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:32:57.896902 | orchestrator |
2026-04-11 00:32:57.896921 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-04-11 00:32:57.896940 | orchestrator | Saturday 11 April 2026 00:32:29 +0000 (0:00:01.158) 0:06:04.709 ********
2026-04-11 00:32:57.896960 | orchestrator | ok: [testbed-manager]
2026-04-11 00:32:57.896979 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:32:57.896998 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:32:57.897017 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:32:57.897034 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:32:57.897053 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:32:57.897070 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:32:57.897089 | orchestrator |
2026-04-11 00:32:57.897137 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-04-11 00:32:57.897157 | orchestrator | Saturday 11 April 2026 00:32:39 +0000 (0:00:09.811) 0:06:14.520 ********
2026-04-11 00:32:57.897177 | orchestrator | ok: [testbed-manager]
2026-04-11 00:32:57.897196 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:32:57.897215 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:32:57.897235 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:32:57.897254 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:32:57.897272 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:32:57.897283 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:32:57.897309 | orchestrator |
2026-04-11 00:32:57.897321 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-04-11 00:32:57.897332 | orchestrator | Saturday 11 April 2026 00:32:51 +0000 (0:00:11.308) 0:06:25.829 ********
2026-04-11 00:32:57.897343 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-11 00:32:57.897355 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-11 00:32:57.897366 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-11 00:32:57.897377 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-11 00:32:57.897387 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-11 00:32:57.897398 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-11 00:32:57.897409 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-11 00:32:57.897420 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-11 00:32:57.897431 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-11 00:32:57.897442 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-11 00:32:57.897452 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-11 00:32:57.897471 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-11 00:32:57.897482 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-11 00:32:57.897493 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-11 00:32:57.897504 | orchestrator |
2026-04-11 00:32:57.897515 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-11 00:32:57.897526 | orchestrator | Saturday 11 April 2026 00:32:52 +0000 (0:00:01.203) 0:06:27.032 ********
2026-04-11 00:32:57.897536 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:32:57.897547 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:32:57.897558 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:32:57.897569 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:32:57.897580 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:32:57.897590 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:32:57.897601 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:32:57.897612 | orchestrator |
2026-04-11 00:32:57.897623 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-11 00:32:57.897634 | orchestrator | Saturday 11 April 2026 00:32:52 +0000 (0:00:00.640) 0:06:27.672 ********
2026-04-11 00:32:57.897645 | orchestrator | ok: [testbed-manager]
2026-04-11 00:32:57.897656 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:32:57.897667 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:32:57.897678 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:32:57.897689 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:32:57.897700 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:32:57.897711 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:32:57.897722 | orchestrator |
2026-04-11 00:32:57.897733 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-11 00:32:57.897745 | orchestrator | Saturday 11 April 2026 00:32:57 +0000 (0:00:04.218) 0:06:31.891 ********
2026-04-11 00:32:57.897756 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:32:57.897767 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:32:57.897778 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:32:57.897789 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:32:57.897799 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:32:57.897810 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:32:57.897821 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:32:57.897832 | orchestrator |
2026-04-11 00:32:57.897844 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-11 00:32:57.897855 | orchestrator | Saturday 11 April 2026 00:32:57 +0000 (0:00:00.505) 0:06:32.397 ********
2026-04-11 00:32:57.897866 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-11 00:32:57.897877 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-11 00:32:57.897895 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:32:57.897906 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-11 00:32:57.897917 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-11 00:32:57.897927 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:32:57.897938 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-11 00:32:57.897949 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-11 00:32:57.897960 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:32:57.897983 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-11 00:33:17.170570 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-11 00:33:17.170658 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:33:17.170667 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-11 00:33:17.170673 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-11 00:33:17.170679 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:33:17.170684 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-11 00:33:17.170690 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-11 00:33:17.170695 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:33:17.170701 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-11 00:33:17.170706 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-11 00:33:17.170711 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:33:17.170717 | orchestrator |
2026-04-11 00:33:17.170725 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-11 00:33:17.170731 | orchestrator | Saturday 11 April 2026 00:32:58 +0000 (0:00:00.556) 0:06:32.953 ********
2026-04-11 00:33:17.170737 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:33:17.170742 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:33:17.170747 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:33:17.170752 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:33:17.170757 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:33:17.170763 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:33:17.170768 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:33:17.170773 | orchestrator |
2026-04-11 00:33:17.170779 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-11 00:33:17.170785 | orchestrator | Saturday 11 April 2026 00:32:58 +0000 (0:00:00.497) 0:06:33.450 ********
2026-04-11 00:33:17.170790 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:33:17.170796 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:33:17.170801 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:33:17.170806 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:33:17.170811 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:33:17.170817 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:33:17.170822 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:33:17.170831 | orchestrator |
2026-04-11 00:33:17.170839 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-11 00:33:17.170848 | orchestrator | Saturday 11 April 2026 00:32:59 +0000 (0:00:00.633) 0:06:34.084 ********
2026-04-11 00:33:17.170856 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:33:17.170868 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:33:17.170880 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:33:17.170887 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:33:17.170896 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:33:17.170904 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:33:17.170913 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:33:17.170921 | orchestrator |
2026-04-11 00:33:17.170929 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-11 00:33:17.170937 | orchestrator | Saturday 11 April 2026 00:32:59 +0000 (0:00:00.531) 0:06:34.616 ********
2026-04-11 00:33:17.170946 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:17.170980 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:33:17.170987 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:33:17.170992 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:33:17.170997 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:33:17.171002 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:33:17.171008 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:33:17.171014 | orchestrator |
2026-04-11 00:33:17.171020 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-11 00:33:17.171026 | orchestrator | Saturday 11 April 2026 00:33:01 +0000 (0:00:01.815) 0:06:36.431 ********
2026-04-11 00:33:17.171033 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:33:17.171041 | orchestrator |
2026-04-11 00:33:17.171047 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-11 00:33:17.171053 | orchestrator | Saturday 11 April 2026 00:33:02 +0000 (0:00:00.842) 0:06:37.273 ********
2026-04-11 00:33:17.171089 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:17.171095 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:33:17.171101 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:33:17.171107 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:33:17.171113 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:33:17.171121 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:33:17.171127 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:33:17.171134 | orchestrator |
2026-04-11 00:33:17.171140 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-11 00:33:17.171147 | orchestrator | Saturday 11 April 2026 00:33:03 +0000 (0:00:01.038) 0:06:38.312 ********
2026-04-11 00:33:17.171153 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:17.171160 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:33:17.171166 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:33:17.171173 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:33:17.171179 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:33:17.171186 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:33:17.171192 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:33:17.171199 | orchestrator |
2026-04-11 00:33:17.171206 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-11 00:33:17.171212 | orchestrator | Saturday 11 April 2026 00:33:04 +0000 (0:00:00.887) 0:06:39.199 ********
2026-04-11 00:33:17.171219 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:17.171226 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:33:17.171232 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:33:17.171239 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:33:17.171245 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:33:17.171253 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:33:17.171259 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:33:17.171266 | orchestrator |
2026-04-11 00:33:17.171272 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-11 00:33:17.171293 | orchestrator | Saturday 11 April 2026 00:33:05 +0000 (0:00:01.453) 0:06:40.653 ********
2026-04-11 00:33:17.171300 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:33:17.171306 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:33:17.171313 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:33:17.171319 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:33:17.171326 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:33:17.171333 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:33:17.171339 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:33:17.171345 | orchestrator |
2026-04-11 00:33:17.171352 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-11 00:33:17.171359 | orchestrator | Saturday 11 April 2026 00:33:07 +0000 (0:00:01.344) 0:06:41.997 ********
2026-04-11 00:33:17.171366 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:17.171372 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:33:17.171379 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:33:17.171391 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:33:17.171398 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:33:17.171405 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:33:17.171412 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:33:17.171418 | orchestrator |
2026-04-11 00:33:17.171425 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-11 00:33:17.171431 | orchestrator | Saturday 11 April 2026 00:33:08 +0000 (0:00:01.280) 0:06:43.278 ********
2026-04-11 00:33:17.171438 | orchestrator | changed: [testbed-manager]
2026-04-11 00:33:17.171443 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:33:17.171449 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:33:17.171455 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:33:17.171460 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:33:17.171466 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:33:17.171471 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:33:17.171477 | orchestrator |
2026-04-11 00:33:17.171483 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-11 00:33:17.171489 | orchestrator | Saturday 11 April 2026 00:33:10 +0000 (0:00:01.552) 0:06:44.830 ********
2026-04-11 00:33:17.171495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:33:17.171501 | orchestrator |
2026-04-11 00:33:17.171506 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-11 00:33:17.171512 | orchestrator | Saturday 11 April 2026 00:33:10 +0000 (0:00:00.817) 0:06:45.648 ********
2026-04-11 00:33:17.171518 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:17.171524 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:33:17.171529 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:33:17.171535 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:33:17.171541 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:33:17.171546 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:33:17.171552 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:33:17.171558 | orchestrator |
2026-04-11 00:33:17.171576 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-04-11 00:33:17.171582 | orchestrator | Saturday 11 April 2026 00:33:12 +0000 (0:00:01.332) 0:06:46.980 ********
2026-04-11 00:33:17.171588 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:17.171593 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:33:17.171599 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:33:17.171604 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:33:17.171610 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:33:17.171616 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:33:17.171621 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:33:17.171627 | orchestrator |
2026-04-11 00:33:17.171633 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-04-11 00:33:17.171638 | orchestrator | Saturday 11 April 2026 00:33:13 +0000 (0:00:01.310) 0:06:48.291 ********
2026-04-11 00:33:17.171644 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:17.171649 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:33:17.171655 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:33:17.171661 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:33:17.171666 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:33:17.171672 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:33:17.171678 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:33:17.171683 | orchestrator |
2026-04-11 00:33:17.171689 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-04-11 00:33:17.171695 | orchestrator | Saturday 11 April 2026 00:33:14 +0000 (0:00:01.184) 0:06:49.476 ********
2026-04-11 00:33:17.171700 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:17.171706 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:33:17.171712 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:33:17.171717 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:33:17.171723 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:33:17.171732 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:33:17.171738 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:33:17.171744 | orchestrator |
2026-04-11 00:33:17.171750 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-04-11 00:33:17.171755 | orchestrator | Saturday 11 April 2026 00:33:16 +0000 (0:00:01.365) 0:06:50.841 ********
2026-04-11 00:33:17.171761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:33:17.171767 | orchestrator |
2026-04-11 00:33:17.171773 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-11 00:33:17.171778 | orchestrator | Saturday 11 April 2026 00:33:16 +0000 (0:00:00.849) 0:06:51.691 ********
2026-04-11 00:33:17.171784 | orchestrator |
2026-04-11 00:33:17.171790 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-11 00:33:17.171796 | orchestrator | Saturday 11 April 2026 00:33:16 +0000 (0:00:00.039) 0:06:51.730 ********
2026-04-11 00:33:17.171801 | orchestrator |
2026-04-11 00:33:17.171807 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-11 00:33:17.171813 | orchestrator | Saturday 11 April 2026 00:33:17 +0000 (0:00:00.183) 0:06:51.913 ********
2026-04-11 00:33:17.171818 | orchestrator |
2026-04-11 00:33:17.171824 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-11 00:33:17.171833 | orchestrator | Saturday 11 April 2026 00:33:17 +0000 (0:00:00.039) 0:06:51.952 ********
2026-04-11 00:33:44.531131 | orchestrator |
2026-04-11 00:33:44.531233 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-11 00:33:44.531249 | orchestrator | Saturday 11 April 2026 00:33:17 +0000 (0:00:00.039) 0:06:51.991 ********
2026-04-11 00:33:44.531262 | orchestrator |
2026-04-11 00:33:44.531274 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-11 00:33:44.531286 | orchestrator | Saturday 11 April 2026 00:33:17 +0000 (0:00:00.044) 0:06:52.036 ********
2026-04-11 00:33:44.531297 | orchestrator |
2026-04-11 00:33:44.531309 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-11 00:33:44.531321 | orchestrator | Saturday 11 April 2026 00:33:17 +0000 (0:00:00.040) 0:06:52.076 ********
2026-04-11 00:33:44.531332 | orchestrator |
2026-04-11 00:33:44.531344 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-11 00:33:44.531355 | orchestrator | Saturday 11 April 2026 00:33:17 +0000 (0:00:00.039) 0:06:52.116 ********
2026-04-11 00:33:44.531367 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:33:44.531379 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:33:44.531391 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:33:44.531402 | orchestrator |
2026-04-11 00:33:44.531413 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-11 00:33:44.531425 | orchestrator | Saturday 11 April 2026 00:33:18 +0000 (0:00:01.260) 0:06:53.376 ********
2026-04-11 00:33:44.531436 | orchestrator | changed: [testbed-manager]
2026-04-11 00:33:44.531449 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:33:44.531460 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:33:44.531472 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:33:44.531483 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:33:44.531494 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:33:44.531505 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:33:44.531517 | orchestrator |
2026-04-11 00:33:44.531528 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-11 00:33:44.531540 | orchestrator | Saturday 11 April 2026 00:33:20 +0000 (0:00:01.596) 0:06:54.972 ********
2026-04-11 00:33:44.531551 | orchestrator | changed: [testbed-manager]
2026-04-11 00:33:44.531563 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:33:44.531574 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:33:44.531585 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:33:44.531597 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:33:44.531633 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:33:44.531646 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:33:44.531657 | orchestrator |
2026-04-11 00:33:44.531669 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-11 00:33:44.531681 | orchestrator | Saturday 11 April 2026 00:33:21 +0000 (0:00:01.175) 0:06:56.148 ********
2026-04-11 00:33:44.531693 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:33:44.531705 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:33:44.531717 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:33:44.531743 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:33:44.531756 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:33:44.531767 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:33:44.531779 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:33:44.531791 | orchestrator |
2026-04-11 00:33:44.531803 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-11 00:33:44.531815 | orchestrator | Saturday 11 April 2026 00:33:23 +0000 (0:00:02.465) 0:06:58.613 ********
2026-04-11 00:33:44.531827 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:33:44.531839 | orchestrator |
2026-04-11 00:33:44.531851 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-11 00:33:44.531862 | orchestrator | Saturday 11 April 2026 00:33:23 +0000 (0:00:00.103) 0:06:58.716 ********
2026-04-11 00:33:44.531874 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:44.531885 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:33:44.531897 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:33:44.531909 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:33:44.531921 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:33:44.531932 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:33:44.531944 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:33:44.531955 | orchestrator |
2026-04-11 00:33:44.531967 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-11 00:33:44.531979 | orchestrator | Saturday 11 April 2026 00:33:25 +0000 (0:00:01.189) 0:06:59.906 ********
2026-04-11 00:33:44.531991 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:33:44.532050 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:33:44.532062 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:33:44.532074 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:33:44.532086 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:33:44.532097 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:33:44.532109 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:33:44.532120 | orchestrator |
2026-04-11 00:33:44.532131 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-11 00:33:44.532143 | orchestrator | Saturday 11 April 2026 00:33:25 +0000 (0:00:00.497) 0:07:00.403 ********
2026-04-11 00:33:44.532155 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:33:44.532168 | orchestrator |
2026-04-11 00:33:44.532179 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-11 00:33:44.532190 | orchestrator | Saturday 11 April 2026 00:33:26 +0000 (0:00:00.865) 0:07:01.269 ********
2026-04-11 00:33:44.532201 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:44.532213 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:33:44.532220 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:33:44.532227 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:33:44.532233 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:33:44.532239 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:33:44.532245 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:33:44.532251 | orchestrator |
2026-04-11 00:33:44.532257 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-11 00:33:44.532264 | orchestrator | Saturday 11 April 2026 00:33:27 +0000 (0:00:01.000) 0:07:02.269 ********
2026-04-11 00:33:44.532270 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-11 00:33:44.532297 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-11 00:33:44.532305 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-11 00:33:44.532311 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-11 00:33:44.532317 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-11 00:33:44.532324 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-11 00:33:44.532334 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-11 00:33:44.532345 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-11 00:33:44.532355 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-11 00:33:44.532365 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-11 00:33:44.532376 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-11 00:33:44.532388 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-11 00:33:44.532398 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-11 00:33:44.532409 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-11 00:33:44.532419 | orchestrator |
2026-04-11 00:33:44.532430 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-11 00:33:44.532441 | orchestrator | Saturday 11 April 2026 00:33:30 +0000 (0:00:02.608) 0:07:04.878 ********
2026-04-11 00:33:44.532451 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:33:44.532462 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:33:44.532472 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:33:44.532481 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:33:44.532492 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:33:44.532501 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:33:44.532510 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:33:44.532519 | orchestrator |
2026-04-11 00:33:44.532530 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-11 00:33:44.532539 | orchestrator | Saturday 11 April 2026 00:33:30 +0000 (0:00:00.465) 0:07:05.343 ********
2026-04-11 00:33:44.532551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:33:44.532563 | orchestrator |
2026-04-11 00:33:44.532573 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-11 00:33:44.532583 | orchestrator | Saturday 11 April 2026 00:33:31 +0000 (0:00:00.901) 0:07:06.245 ********
2026-04-11 00:33:44.532593 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:44.532604 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:33:44.532615 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:33:44.532625 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:33:44.532635 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:33:44.532645 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:33:44.532655 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:33:44.532679 | orchestrator |
2026-04-11 00:33:44.532699 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-11 00:33:44.532713 | orchestrator | Saturday 11 April 2026 00:33:32 +0000 (0:00:00.845) 0:07:07.090 ********
2026-04-11 00:33:44.532723 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:44.532734 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:33:44.532741 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:33:44.532747 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:33:44.532753 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:33:44.532760 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:33:44.532766 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:33:44.532772 | orchestrator |
2026-04-11 00:33:44.532778 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-11 00:33:44.532785 | orchestrator | Saturday 11 April 2026 00:33:33 +0000 (0:00:00.798) 0:07:07.889 ********
2026-04-11 00:33:44.532800 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:33:44.532807 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:33:44.532813 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:33:44.532819 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:33:44.532825 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:33:44.532831 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:33:44.532837 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:33:44.532844 | orchestrator |
2026-04-11 00:33:44.532850 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-11 00:33:44.532856 | orchestrator | Saturday 11 April 2026 00:33:33 +0000 (0:00:00.477) 0:07:08.366 ********
2026-04-11 00:33:44.532863 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:44.532869 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:33:44.532875 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:33:44.532881 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:33:44.532887 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:33:44.532893 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:33:44.532907 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:33:44.532913 | orchestrator |
2026-04-11 00:33:44.532919 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-11 00:33:44.532925 | orchestrator | Saturday 11 April 2026 00:33:35 +0000 (0:00:01.466) 0:07:09.833 ********
2026-04-11 00:33:44.532931 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:33:44.532938 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:33:44.532944 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:33:44.532950 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:33:44.532956 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:33:44.532962 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:33:44.532968 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:33:44.532974 | orchestrator |
2026-04-11 00:33:44.532981 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-11 00:33:44.532987 | orchestrator | Saturday 11 April 2026 00:33:35 +0000 (0:00:00.655) 0:07:10.488 ********
2026-04-11 00:33:44.532993 | orchestrator | ok: [testbed-manager]
2026-04-11 00:33:44.533030 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:33:44.533038 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:33:44.533045 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:33:44.533051 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:33:44.533057 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:33:44.533071 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:34:17.185715 | orchestrator |
2026-04-11 00:34:17.185833 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-11 00:34:17.185878 | orchestrator | Saturday 11 April 2026 00:33:44 +0000 (0:00:08.888) 0:07:19.377 ********
2026-04-11 00:34:17.185896 | orchestrator | ok: [testbed-manager]
2026-04-11 00:34:17.185989 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:34:17.186001 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:34:17.186014 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:34:17.186104 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:34:17.186118 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:34:17.186133 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:34:17.186148 | orchestrator |
2026-04-11 00:34:17.186164 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-11 00:34:17.186180 | orchestrator | Saturday 11 April 2026 00:33:45 +0000 (0:00:01.359) 0:07:20.736 ********
2026-04-11 00:34:17.186193 | orchestrator | ok: [testbed-manager]
2026-04-11 00:34:17.186207 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:34:17.186221 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:34:17.186236 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:34:17.186251 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:34:17.186266 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:34:17.186283 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:34:17.186298 | orchestrator |
2026-04-11 00:34:17.186313 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-11 00:34:17.186360 | orchestrator | Saturday 11 April 2026 00:33:47 +0000 (0:00:01.620) 0:07:22.357 ********
2026-04-11 00:34:17.186377 | orchestrator | ok: [testbed-manager]
2026-04-11 00:34:17.186393 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:34:17.186406 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:34:17.186417 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:34:17.186427 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:34:17.186437 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:34:17.186446 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:34:17.186456 | orchestrator |
2026-04-11 00:34:17.186466 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-11 00:34:17.186476 | orchestrator | Saturday 11 April 2026 00:33:49 +0000 (0:00:01.860) 0:07:24.217 ********
2026-04-11 00:34:17.186485 | orchestrator | ok: [testbed-manager]
2026-04-11 00:34:17.186496 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:34:17.186505 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:34:17.186515 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:34:17.186525 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:34:17.186535 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:34:17.186545 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:34:17.186554 | orchestrator |
2026-04-11 00:34:17.186564 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-11 00:34:17.186574 | orchestrator | Saturday 11 April 2026 00:33:50 +0000 (0:00:00.916) 0:07:25.134 ********
2026-04-11 00:34:17.186585 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:34:17.186595 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:34:17.186613 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:34:17.186624 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:34:17.186634 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:34:17.186643 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:34:17.186652 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:34:17.186660 | orchestrator |
2026-04-11 00:34:17.186669 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-11 00:34:17.186678 | orchestrator | Saturday 11 April 2026 00:33:51 +0000 (0:00:00.787) 0:07:25.921 ********
2026-04-11 00:34:17.186686 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:34:17.186695 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:34:17.186703 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:34:17.186712 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:34:17.186720 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:34:17.186729 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:34:17.186737 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:34:17.186746 | orchestrator | 2026-04-11 00:34:17.186755 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-04-11 00:34:17.186763 | orchestrator | Saturday 11 April 2026 00:33:51 +0000 (0:00:00.680) 0:07:26.602 ******** 2026-04-11 00:34:17.186772 | orchestrator | ok: [testbed-manager] 2026-04-11 00:34:17.186781 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:34:17.186789 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:34:17.186798 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:34:17.186806 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:34:17.186815 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:34:17.186824 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:34:17.186833 | orchestrator | 2026-04-11 00:34:17.186841 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-04-11 00:34:17.186850 | orchestrator | Saturday 11 April 2026 00:33:52 +0000 (0:00:00.498) 0:07:27.100 ******** 2026-04-11 00:34:17.186859 | orchestrator | ok: [testbed-manager] 2026-04-11 00:34:17.186867 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:34:17.186876 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:34:17.186884 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:34:17.186892 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:34:17.186901 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:34:17.186941 | orchestrator | ok: [testbed-node-5] 2026-04-11 
00:34:17.186973 | orchestrator | 2026-04-11 00:34:17.186991 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-04-11 00:34:17.187006 | orchestrator | Saturday 11 April 2026 00:33:52 +0000 (0:00:00.531) 0:07:27.631 ******** 2026-04-11 00:34:17.187020 | orchestrator | ok: [testbed-manager] 2026-04-11 00:34:17.187032 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:34:17.187044 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:34:17.187056 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:34:17.187068 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:34:17.187082 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:34:17.187095 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:34:17.187108 | orchestrator | 2026-04-11 00:34:17.187121 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-04-11 00:34:17.187133 | orchestrator | Saturday 11 April 2026 00:33:53 +0000 (0:00:00.488) 0:07:28.120 ******** 2026-04-11 00:34:17.187146 | orchestrator | ok: [testbed-manager] 2026-04-11 00:34:17.187160 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:34:17.187173 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:34:17.187187 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:34:17.187201 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:34:17.187214 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:34:17.187229 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:34:17.187242 | orchestrator | 2026-04-11 00:34:17.187282 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-04-11 00:34:17.187297 | orchestrator | Saturday 11 April 2026 00:33:59 +0000 (0:00:05.990) 0:07:34.111 ******** 2026-04-11 00:34:17.187310 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:34:17.187324 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:34:17.187339 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:34:17.187354 
| orchestrator | skipping: [testbed-node-2] 2026-04-11 00:34:17.187370 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:34:17.187385 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:34:17.187398 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:34:17.187412 | orchestrator | 2026-04-11 00:34:17.187426 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-04-11 00:34:17.187441 | orchestrator | Saturday 11 April 2026 00:34:00 +0000 (0:00:00.696) 0:07:34.807 ******** 2026-04-11 00:34:17.187459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:34:17.187476 | orchestrator | 2026-04-11 00:34:17.187493 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-04-11 00:34:17.187502 | orchestrator | Saturday 11 April 2026 00:34:00 +0000 (0:00:00.778) 0:07:35.585 ******** 2026-04-11 00:34:17.187511 | orchestrator | ok: [testbed-manager] 2026-04-11 00:34:17.187519 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:34:17.187528 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:34:17.187537 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:34:17.187545 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:34:17.187554 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:34:17.187563 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:34:17.187571 | orchestrator | 2026-04-11 00:34:17.187580 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-04-11 00:34:17.187589 | orchestrator | Saturday 11 April 2026 00:34:02 +0000 (0:00:01.998) 0:07:37.584 ******** 2026-04-11 00:34:17.187598 | orchestrator | ok: [testbed-manager] 2026-04-11 00:34:17.187606 | orchestrator | ok: [testbed-node-0] 2026-04-11 
00:34:17.187615 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:34:17.187625 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:34:17.187639 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:34:17.187658 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:34:17.187679 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:34:17.187691 | orchestrator | 2026-04-11 00:34:17.187704 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-04-11 00:34:17.187729 | orchestrator | Saturday 11 April 2026 00:34:03 +0000 (0:00:01.200) 0:07:38.784 ******** 2026-04-11 00:34:17.187742 | orchestrator | ok: [testbed-manager] 2026-04-11 00:34:17.187754 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:34:17.187767 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:34:17.187780 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:34:17.187793 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:34:17.187814 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:34:17.187828 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:34:17.187843 | orchestrator | 2026-04-11 00:34:17.187859 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-04-11 00:34:17.187874 | orchestrator | Saturday 11 April 2026 00:34:04 +0000 (0:00:00.832) 0:07:39.617 ******** 2026-04-11 00:34:17.187889 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-11 00:34:17.187929 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-11 00:34:17.187944 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-11 00:34:17.187958 | orchestrator | changed: [testbed-node-2] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-11 00:34:17.187967 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-11 00:34:17.187976 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-11 00:34:17.187985 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-11 00:34:17.187994 | orchestrator | 2026-04-11 00:34:17.188002 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-04-11 00:34:17.188011 | orchestrator | Saturday 11 April 2026 00:34:06 +0000 (0:00:01.718) 0:07:41.336 ******** 2026-04-11 00:34:17.188020 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:34:17.188029 | orchestrator | 2026-04-11 00:34:17.188038 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-04-11 00:34:17.188047 | orchestrator | Saturday 11 April 2026 00:34:07 +0000 (0:00:00.936) 0:07:42.273 ******** 2026-04-11 00:34:17.188056 | orchestrator | changed: [testbed-manager] 2026-04-11 00:34:17.188065 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:34:17.188073 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:34:17.188082 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:34:17.188091 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:34:17.188099 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:34:17.188108 | orchestrator | changed: 
[testbed-node-5] 2026-04-11 00:34:17.188116 | orchestrator | 2026-04-11 00:34:17.188137 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-11 00:34:47.152215 | orchestrator | Saturday 11 April 2026 00:34:17 +0000 (0:00:09.694) 0:07:51.968 ******** 2026-04-11 00:34:47.152367 | orchestrator | ok: [testbed-manager] 2026-04-11 00:34:47.152385 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:34:47.152397 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:34:47.152408 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:34:47.152419 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:34:47.152430 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:34:47.152441 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:34:47.152452 | orchestrator | 2026-04-11 00:34:47.152464 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-11 00:34:47.152505 | orchestrator | Saturday 11 April 2026 00:34:18 +0000 (0:00:01.766) 0:07:53.734 ******** 2026-04-11 00:34:47.152517 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:34:47.152528 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:34:47.152539 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:34:47.152549 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:34:47.152560 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:34:47.152570 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:34:47.152581 | orchestrator | 2026-04-11 00:34:47.152592 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-04-11 00:34:47.152603 | orchestrator | Saturday 11 April 2026 00:34:20 +0000 (0:00:01.595) 0:07:55.330 ******** 2026-04-11 00:34:47.152614 | orchestrator | changed: [testbed-manager] 2026-04-11 00:34:47.152627 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:34:47.152637 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:34:47.152648 | orchestrator | changed: 
[testbed-node-2] 2026-04-11 00:34:47.152659 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:34:47.152670 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:34:47.152680 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:34:47.152691 | orchestrator | 2026-04-11 00:34:47.152702 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-04-11 00:34:47.152713 | orchestrator | 2026-04-11 00:34:47.152726 | orchestrator | TASK [Include hardening role] ************************************************** 2026-04-11 00:34:47.152738 | orchestrator | Saturday 11 April 2026 00:34:21 +0000 (0:00:01.214) 0:07:56.544 ******** 2026-04-11 00:34:47.152750 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:34:47.152763 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:34:47.152775 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:34:47.152788 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:34:47.152799 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:34:47.152835 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:34:47.152846 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:34:47.152857 | orchestrator | 2026-04-11 00:34:47.152868 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-04-11 00:34:47.152879 | orchestrator | 2026-04-11 00:34:47.152890 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-04-11 00:34:47.152900 | orchestrator | Saturday 11 April 2026 00:34:22 +0000 (0:00:00.475) 0:07:57.019 ******** 2026-04-11 00:34:47.152911 | orchestrator | changed: [testbed-manager] 2026-04-11 00:34:47.152941 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:34:47.152952 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:34:47.152962 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:34:47.152973 | orchestrator | changed: [testbed-node-4] 2026-04-11 
00:34:47.152983 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:34:47.152994 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:34:47.153004 | orchestrator | 2026-04-11 00:34:47.153015 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-04-11 00:34:47.153039 | orchestrator | Saturday 11 April 2026 00:34:23 +0000 (0:00:01.348) 0:07:58.368 ******** 2026-04-11 00:34:47.153050 | orchestrator | ok: [testbed-manager] 2026-04-11 00:34:47.153071 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:34:47.153082 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:34:47.153092 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:34:47.153103 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:34:47.153114 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:34:47.153125 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:34:47.153135 | orchestrator | 2026-04-11 00:34:47.153146 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-04-11 00:34:47.153157 | orchestrator | Saturday 11 April 2026 00:34:25 +0000 (0:00:01.603) 0:07:59.971 ******** 2026-04-11 00:34:47.153168 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:34:47.153178 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:34:47.153189 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:34:47.153200 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:34:47.153222 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:34:47.153233 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:34:47.153244 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:34:47.153254 | orchestrator | 2026-04-11 00:34:47.153265 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-04-11 00:34:47.153276 | orchestrator | Saturday 11 April 2026 00:34:25 +0000 (0:00:00.469) 0:08:00.441 ******** 2026-04-11 00:34:47.153288 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:34:47.153301 | orchestrator | 2026-04-11 00:34:47.153312 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-04-11 00:34:47.153323 | orchestrator | Saturday 11 April 2026 00:34:26 +0000 (0:00:00.813) 0:08:01.255 ******** 2026-04-11 00:34:47.153335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:34:47.153349 | orchestrator | 2026-04-11 00:34:47.153359 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-04-11 00:34:47.153370 | orchestrator | Saturday 11 April 2026 00:34:27 +0000 (0:00:00.893) 0:08:02.148 ******** 2026-04-11 00:34:47.153381 | orchestrator | changed: [testbed-manager] 2026-04-11 00:34:47.153392 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:34:47.153403 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:34:47.153413 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:34:47.153424 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:34:47.153435 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:34:47.153445 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:34:47.153456 | orchestrator | 2026-04-11 00:34:47.153490 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-04-11 00:34:47.153501 | orchestrator | Saturday 11 April 2026 00:34:36 +0000 (0:00:08.879) 0:08:11.028 ******** 2026-04-11 00:34:47.153512 | orchestrator | changed: [testbed-manager] 2026-04-11 00:34:47.153523 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:34:47.153534 | orchestrator | changed: [testbed-node-1] 2026-04-11 
00:34:47.153544 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:34:47.153555 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:34:47.153566 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:34:47.153576 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:34:47.153587 | orchestrator | 2026-04-11 00:34:47.153598 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-04-11 00:34:47.153609 | orchestrator | Saturday 11 April 2026 00:34:37 +0000 (0:00:00.848) 0:08:11.876 ******** 2026-04-11 00:34:47.153620 | orchestrator | changed: [testbed-manager] 2026-04-11 00:34:47.153630 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:34:47.153641 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:34:47.153652 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:34:47.153662 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:34:47.153673 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:34:47.153684 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:34:47.153695 | orchestrator | 2026-04-11 00:34:47.153706 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-04-11 00:34:47.153717 | orchestrator | Saturday 11 April 2026 00:34:38 +0000 (0:00:01.319) 0:08:13.195 ******** 2026-04-11 00:34:47.153727 | orchestrator | changed: [testbed-manager] 2026-04-11 00:34:47.153738 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:34:47.153749 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:34:47.153760 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:34:47.153770 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:34:47.153781 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:34:47.153791 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:34:47.153828 | orchestrator | 2026-04-11 00:34:47.153840 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-04-11 00:34:47.153859 | orchestrator | Saturday 11 April 2026 00:34:40 +0000 (0:00:01.884) 0:08:15.080 ******** 2026-04-11 00:34:47.153870 | orchestrator | changed: [testbed-manager] 2026-04-11 00:34:47.153881 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:34:47.153892 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:34:47.153902 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:34:47.153913 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:34:47.153923 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:34:47.153934 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:34:47.153945 | orchestrator | 2026-04-11 00:34:47.153955 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-04-11 00:34:47.153966 | orchestrator | Saturday 11 April 2026 00:34:41 +0000 (0:00:01.236) 0:08:16.316 ******** 2026-04-11 00:34:47.153977 | orchestrator | changed: [testbed-manager] 2026-04-11 00:34:47.153995 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:34:47.154006 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:34:47.154113 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:34:47.154128 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:34:47.154139 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:34:47.154149 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:34:47.154160 | orchestrator | 2026-04-11 00:34:47.154171 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-04-11 00:34:47.154182 | orchestrator | 2026-04-11 00:34:47.154193 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-04-11 00:34:47.154204 | orchestrator | Saturday 11 April 2026 00:34:42 +0000 (0:00:01.084) 0:08:17.400 ******** 2026-04-11 00:34:47.154215 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, 
testbed-node-4, testbed-node-5 2026-04-11 00:34:47.154225 | orchestrator | 2026-04-11 00:34:47.154236 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-11 00:34:47.154247 | orchestrator | Saturday 11 April 2026 00:34:43 +0000 (0:00:00.877) 0:08:18.278 ******** 2026-04-11 00:34:47.154258 | orchestrator | ok: [testbed-manager] 2026-04-11 00:34:47.154268 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:34:47.154279 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:34:47.154290 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:34:47.154301 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:34:47.154312 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:34:47.154323 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:34:47.154333 | orchestrator | 2026-04-11 00:34:47.154344 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-11 00:34:47.154355 | orchestrator | Saturday 11 April 2026 00:34:44 +0000 (0:00:00.796) 0:08:19.074 ******** 2026-04-11 00:34:47.154366 | orchestrator | changed: [testbed-manager] 2026-04-11 00:34:47.154377 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:34:47.154388 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:34:47.154398 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:34:47.154409 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:34:47.154419 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:34:47.154430 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:34:47.154441 | orchestrator | 2026-04-11 00:34:47.154452 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-04-11 00:34:47.154462 | orchestrator | Saturday 11 April 2026 00:34:45 +0000 (0:00:01.226) 0:08:20.301 ******** 2026-04-11 00:34:47.154473 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, 
testbed-node-4, testbed-node-5 2026-04-11 00:34:47.154484 | orchestrator | 2026-04-11 00:34:47.154495 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-11 00:34:47.154506 | orchestrator | Saturday 11 April 2026 00:34:46 +0000 (0:00:00.805) 0:08:21.106 ******** 2026-04-11 00:34:47.154516 | orchestrator | ok: [testbed-manager] 2026-04-11 00:34:47.154540 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:34:47.154551 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:34:47.154562 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:34:47.154573 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:34:47.154583 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:34:47.154594 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:34:47.154605 | orchestrator | 2026-04-11 00:34:47.154625 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-11 00:34:48.661682 | orchestrator | Saturday 11 April 2026 00:34:47 +0000 (0:00:00.827) 0:08:21.934 ******** 2026-04-11 00:34:48.661874 | orchestrator | changed: [testbed-manager] 2026-04-11 00:34:48.661896 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:34:48.661909 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:34:48.661921 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:34:48.661932 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:34:48.661942 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:34:48.661952 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:34:48.661963 | orchestrator | 2026-04-11 00:34:48.661976 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:34:48.661987 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-11 00:34:48.662000 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 
2026-04-11 00:34:48.662011 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-11 00:34:48.662098 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-11 00:34:48.662110 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-11 00:34:48.662120 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-11 00:34:48.662130 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-11 00:34:48.662140 | orchestrator | 2026-04-11 00:34:48.662151 | orchestrator | 2026-04-11 00:34:48.662161 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:34:48.662172 | orchestrator | Saturday 11 April 2026 00:34:48 +0000 (0:00:01.218) 0:08:23.152 ******** 2026-04-11 00:34:48.662182 | orchestrator | =============================================================================== 2026-04-11 00:34:48.662193 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.56s 2026-04-11 00:34:48.662232 | orchestrator | osism.commons.packages : Download required packages -------------------- 43.37s 2026-04-11 00:34:48.662245 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 31.94s 2026-04-11 00:34:48.662256 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.43s 2026-04-11 00:34:48.662267 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.43s 2026-04-11 00:34:48.662279 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.25s 2026-04-11 00:34:48.662292 | orchestrator | osism.services.docker : Install docker package ------------------------- 
11.31s 2026-04-11 00:34:48.662303 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.95s 2026-04-11 00:34:48.662314 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.81s 2026-04-11 00:34:48.662324 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.69s 2026-04-11 00:34:48.662336 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.28s 2026-04-11 00:34:48.662379 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.96s 2026-04-11 00:34:48.662391 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 8.89s 2026-04-11 00:34:48.662402 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.89s 2026-04-11 00:34:48.662413 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.88s 2026-04-11 00:34:48.662424 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.69s 2026-04-11 00:34:48.662435 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.21s 2026-04-11 00:34:48.662444 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.61s 2026-04-11 00:34:48.662455 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.36s 2026-04-11 00:34:48.662465 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.99s 2026-04-11 00:34:48.903770 | orchestrator | + osism apply fail2ban 2026-04-11 00:35:00.650828 | orchestrator | 2026-04-11 00:35:00 | INFO  | Prepare task for execution of fail2ban. 2026-04-11 00:35:00.728109 | orchestrator | 2026-04-11 00:35:00 | INFO  | Task e4429ba3-838c-4179-8a34-1b98899b2d28 (fail2ban) was prepared for execution. 
2026-04-11 00:35:00.728210 | orchestrator | 2026-04-11 00:35:00 | INFO  | It takes a moment until task e4429ba3-838c-4179-8a34-1b98899b2d28 (fail2ban) has been started and output is visible here.
2026-04-11 00:35:20.701507 | orchestrator |
2026-04-11 00:35:20.701665 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-11 00:35:20.701686 | orchestrator |
2026-04-11 00:35:20.701700 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-11 00:35:20.701711 | orchestrator | Saturday 11 April 2026 00:35:04 +0000 (0:00:00.332) 0:00:00.332 ********
2026-04-11 00:35:20.701724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:35:20.701737 | orchestrator |
2026-04-11 00:35:20.701748 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-11 00:35:20.701759 | orchestrator | Saturday 11 April 2026 00:35:05 +0000 (0:00:01.114) 0:00:01.447 ********
2026-04-11 00:35:20.701771 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:35:20.701784 | orchestrator | changed: [testbed-manager]
2026-04-11 00:35:20.701795 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:35:20.701805 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:35:20.701816 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:35:20.701827 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:35:20.701838 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:35:20.701849 | orchestrator |
2026-04-11 00:35:20.701860 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-11 00:35:20.701870 | orchestrator | Saturday 11 April 2026 00:35:16 +0000 (0:00:11.113) 0:00:12.561 ********
2026-04-11 00:35:20.701881 | orchestrator | changed: [testbed-manager]
2026-04-11 00:35:20.701892 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:35:20.701903 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:35:20.701913 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:35:20.701924 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:35:20.701935 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:35:20.701946 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:35:20.701956 | orchestrator |
2026-04-11 00:35:20.701967 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-11 00:35:20.701978 | orchestrator | Saturday 11 April 2026 00:35:17 +0000 (0:00:01.457) 0:00:14.018 ********
2026-04-11 00:35:20.701989 | orchestrator | ok: [testbed-manager]
2026-04-11 00:35:20.702001 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:35:20.702012 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:35:20.702131 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:35:20.702145 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:35:20.702158 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:35:20.702170 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:35:20.702182 | orchestrator |
2026-04-11 00:35:20.702195 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-11 00:35:20.702207 | orchestrator | Saturday 11 April 2026 00:35:19 +0000 (0:00:01.175) 0:00:15.194 ********
2026-04-11 00:35:20.702220 | orchestrator | changed: [testbed-manager]
2026-04-11 00:35:20.702232 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:35:20.702244 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:35:20.702257 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:35:20.702270 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:35:20.702282 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:35:20.702294 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:35:20.702305 | orchestrator |
2026-04-11 00:35:20.702318 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:35:20.702331 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:35:20.702345 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:35:20.702358 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:35:20.702371 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:35:20.702384 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:35:20.702396 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:35:20.702408 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:35:20.702419 | orchestrator |
2026-04-11 00:35:20.702430 | orchestrator |
2026-04-11 00:35:20.702441 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:35:20.702452 | orchestrator | Saturday 11 April 2026 00:35:20 +0000 (0:00:01.483) 0:00:16.678 ********
2026-04-11 00:35:20.702462 | orchestrator | ===============================================================================
2026-04-11 00:35:20.702473 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.11s
2026-04-11 00:35:20.702502 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.48s
2026-04-11 00:35:20.702514 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.46s
2026-04-11 00:35:20.702524 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.18s
2026-04-11 00:35:20.702535 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.11s
2026-04-11 00:35:20.822009 | orchestrator | + osism apply network
2026-04-11 00:35:32.055442 | orchestrator | 2026-04-11 00:35:32 | INFO  | Prepare task for execution of network.
2026-04-11 00:35:32.129535 | orchestrator | 2026-04-11 00:35:32 | INFO  | Task 8f19f29c-90c9-4621-ab8a-4b19d601195f (network) was prepared for execution.
2026-04-11 00:35:32.129689 | orchestrator | 2026-04-11 00:35:32 | INFO  | It takes a moment until task 8f19f29c-90c9-4621-ab8a-4b19d601195f (network) has been started and output is visible here.
2026-04-11 00:35:57.901003 | orchestrator |
2026-04-11 00:35:57.901122 | orchestrator | PLAY [Apply role network] ******************************************************
2026-04-11 00:35:57.901147 | orchestrator |
2026-04-11 00:35:57.901166 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-04-11 00:35:57.901216 | orchestrator | Saturday 11 April 2026 00:35:35 +0000 (0:00:00.300) 0:00:00.300 ********
2026-04-11 00:35:57.901237 | orchestrator | ok: [testbed-manager]
2026-04-11 00:35:57.901257 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:35:57.901276 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:35:57.901296 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:35:57.901313 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:35:57.901324 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:35:57.901334 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:35:57.901345 | orchestrator |
2026-04-11 00:35:57.901356 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-04-11 00:35:57.901367 | orchestrator | Saturday 11 April 2026 00:35:35 +0000 (0:00:00.562) 0:00:00.863 ********
2026-04-11 00:35:57.901379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:35:57.901392 | orchestrator |
2026-04-11 00:35:57.901403 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-04-11 00:35:57.901414 | orchestrator | Saturday 11 April 2026 00:35:36 +0000 (0:00:01.052) 0:00:01.915 ********
2026-04-11 00:35:57.901425 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:35:57.901436 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:35:57.901446 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:35:57.901457 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:35:57.901468 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:35:57.901479 | orchestrator | ok: [testbed-manager]
2026-04-11 00:35:57.901491 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:35:57.901501 | orchestrator |
2026-04-11 00:35:57.901512 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-11 00:35:57.901523 | orchestrator | Saturday 11 April 2026 00:35:39 +0000 (0:00:02.612) 0:00:04.528 ********
2026-04-11 00:35:57.901592 | orchestrator | ok: [testbed-manager]
2026-04-11 00:35:57.901608 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:35:57.901640 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:35:57.901672 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:35:57.901690 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:35:57.901708 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:35:57.901726 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:35:57.901744 | orchestrator |
2026-04-11 00:35:57.901763 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-11 00:35:57.901782 | orchestrator | Saturday 11 April 2026 00:35:40 +0000 (0:00:01.478) 0:00:06.007 ********
2026-04-11 00:35:57.901822 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-11 00:35:57.901843 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-11 00:35:57.901861 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-04-11 00:35:57.901879 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-04-11 00:35:57.901896 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-04-11 00:35:57.901913 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-04-11 00:35:57.901925 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-04-11 00:35:57.901935 | orchestrator |
2026-04-11 00:35:57.901947 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-04-11 00:35:57.901958 | orchestrator | Saturday 11 April 2026 00:35:42 +0000 (0:00:01.037) 0:00:07.044 ********
2026-04-11 00:35:57.901969 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 00:35:57.901981 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-11 00:35:57.901992 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-11 00:35:57.902003 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-11 00:35:57.902014 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 00:35:57.902189 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-11 00:35:57.902201 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-11 00:35:57.902212 | orchestrator |
2026-04-11 00:35:57.902238 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-04-11 00:35:57.902249 | orchestrator | Saturday 11 April 2026 00:35:45 +0000 (0:00:02.968) 0:00:10.013 ********
2026-04-11 00:35:57.902260 | orchestrator | changed: [testbed-manager]
2026-04-11 00:35:57.902270 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:35:57.902281 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:35:57.902292 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:35:57.902303 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:35:57.902313 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:35:57.902324 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:35:57.902335 | orchestrator |
2026-04-11 00:35:57.902346 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-04-11 00:35:57.902356 | orchestrator | Saturday 11 April 2026 00:35:46 +0000 (0:00:01.539) 0:00:11.553 ********
2026-04-11 00:35:57.902367 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 00:35:57.902377 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 00:35:57.902388 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-11 00:35:57.902399 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-11 00:35:57.902409 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-11 00:35:57.902420 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-11 00:35:57.902430 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-11 00:35:57.902441 | orchestrator |
2026-04-11 00:35:57.902452 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-04-11 00:35:57.902463 | orchestrator | Saturday 11 April 2026 00:35:48 +0000 (0:00:00.933) 0:00:13.414 ********
2026-04-11 00:35:57.902473 | orchestrator | ok: [testbed-manager]
2026-04-11 00:35:57.902484 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:35:57.902495 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:35:57.902505 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:35:57.902516 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:35:57.902526 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:35:57.902579 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:35:57.902592 | orchestrator |
2026-04-11 00:35:57.902602 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-04-11 00:35:57.902637 | orchestrator | Saturday 11 April 2026 00:35:49 +0000 (0:00:00.933) 0:00:14.348 ********
2026-04-11 00:35:57.902649 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:35:57.902660 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:35:57.902670 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:35:57.902681 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:35:57.902692 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:35:57.902702 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:35:57.902713 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:35:57.902724 | orchestrator |
2026-04-11 00:35:57.902736 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-04-11 00:35:57.902747 | orchestrator | Saturday 11 April 2026 00:35:50 +0000 (0:00:00.664) 0:00:15.012 ********
2026-04-11 00:35:57.902757 | orchestrator | ok: [testbed-manager]
2026-04-11 00:35:57.902768 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:35:57.902779 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:35:57.902790 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:35:57.902801 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:35:57.902812 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:35:57.902822 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:35:57.902833 | orchestrator |
2026-04-11 00:35:57.902844 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-04-11 00:35:57.902855 | orchestrator | Saturday 11 April 2026 00:35:51 +0000 (0:00:01.909) 0:00:16.922 ********
2026-04-11 00:35:57.902866 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:35:57.902877 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:35:57.902888 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:35:57.902899 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:35:57.902910 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:35:57.902929 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:35:57.902941 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'})
2026-04-11 00:35:57.902954 | orchestrator |
2026-04-11 00:35:57.902965 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-04-11 00:35:57.902976 | orchestrator | Saturday 11 April 2026 00:35:52 +0000 (0:00:00.785) 0:00:17.708 ********
2026-04-11 00:35:57.902986 | orchestrator | ok: [testbed-manager]
2026-04-11 00:35:57.902997 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:35:57.903008 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:35:57.903019 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:35:57.903029 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:35:57.903040 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:35:57.903051 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:35:57.903062 | orchestrator |
2026-04-11 00:35:57.903073 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-04-11 00:35:57.903083 | orchestrator | Saturday 11 April 2026 00:35:54 +0000 (0:00:01.497) 0:00:19.206 ********
2026-04-11 00:35:57.903103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:35:57.903116 | orchestrator |
2026-04-11 00:35:57.903127 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-11 00:35:57.903138 | orchestrator | Saturday 11 April 2026 00:35:55 +0000 (0:00:01.071) 0:00:20.277 ********
2026-04-11 00:35:57.903149 | orchestrator | ok: [testbed-manager]
2026-04-11 00:35:57.903159 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:35:57.903170 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:35:57.903181 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:35:57.903192 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:35:57.903203 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:35:57.903214 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:35:57.903225 | orchestrator |
2026-04-11 00:35:57.903236 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-04-11 00:35:57.903247 | orchestrator | Saturday 11 April 2026 00:35:56 +0000 (0:00:01.037) 0:00:21.315 ********
2026-04-11 00:35:57.903257 | orchestrator | ok: [testbed-manager]
2026-04-11 00:35:57.903268 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:35:57.903279 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:35:57.903290 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:35:57.903300 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:35:57.903311 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:35:57.903322 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:35:57.903333 | orchestrator |
2026-04-11 00:35:57.903344 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-11 00:35:57.903354 | orchestrator | Saturday 11 April 2026 00:35:56 +0000 (0:00:00.642) 0:00:21.957 ********
2026-04-11 00:35:57.903366 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml) 
2026-04-11 00:35:57.903376 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml) 
2026-04-11 00:35:57.903387 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml) 
2026-04-11 00:35:57.903398 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml) 
2026-04-11 00:35:57.903409 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 00:35:57.903420 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml) 
2026-04-11 00:35:57.903430 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml) 
2026-04-11 00:35:57.903441 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 00:35:57.903452 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml) 
2026-04-11 00:35:57.903463 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 00:35:57.903481 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 00:35:57.903497 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 00:35:57.903516 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 00:35:57.903534 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 00:35:57.903624 | orchestrator |
2026-04-11 00:35:57.903654 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-04-11 00:36:12.349236 | orchestrator | Saturday 11 April 2026 00:35:57 +0000 (0:00:00.948) 0:00:22.906 ********
2026-04-11 00:36:12.349381 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:36:12.349401 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:36:12.349413 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:36:12.349425 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:36:12.349435 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:36:12.349446 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:36:12.349457 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:36:12.349469 | orchestrator |
2026-04-11 00:36:12.349481 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-04-11 00:36:12.349493 | orchestrator | Saturday 11 April 2026 00:35:58 +0000 (0:00:00.662) 0:00:23.568 ********
2026-04-11 00:36:12.349580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-2, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:36:12.349598 | orchestrator |
2026-04-11 00:36:12.349610 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-04-11 00:36:12.349621 | orchestrator | Saturday 11 April 2026 00:36:02 +0000 (0:00:04.012) 0:00:27.580 ********
2026-04-11 00:36:12.349634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-11 00:36:12.349646 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-11 00:36:12.349659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-11 00:36:12.349687 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-11 00:36:12.349699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-11 00:36:12.349710 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-11 00:36:12.349721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-11 00:36:12.349733 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-11 00:36:12.349774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-11 00:36:12.349787 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-11 00:36:12.349798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-11 00:36:12.349829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-11 00:36:12.349841 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-11 00:36:12.349852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-11 00:36:12.349863 | orchestrator |
2026-04-11 00:36:12.349874 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-04-11 00:36:12.349885 | orchestrator | Saturday 11 April 2026 00:36:07 +0000 (0:00:04.876) 0:00:32.457 ********
2026-04-11 00:36:12.349897 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-11 00:36:12.349908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-11 00:36:12.349919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-11 00:36:12.349930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-11 00:36:12.349946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-11 00:36:12.349957 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-11 00:36:12.349968 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-11 00:36:12.349987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-11 00:36:12.349999 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-11 00:36:12.350010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-11 00:36:12.350084 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-11 00:36:12.350096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-11 00:36:12.350119 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-11 00:36:24.525317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-11 00:36:24.525450 | orchestrator |
2026-04-11 00:36:24.525538 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-04-11 00:36:24.525561 | orchestrator | Saturday 11 April 2026 00:36:12 +0000 (0:00:05.233) 0:00:37.690 ********
2026-04-11 00:36:24.525580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:36:24.525599 | orchestrator |
2026-04-11 00:36:24.525618 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-11 00:36:24.525637 | orchestrator | Saturday 11 April 2026 00:36:13 +0000 (0:00:01.060) 0:00:38.751 ********
2026-04-11 00:36:24.525657 | orchestrator | ok: [testbed-manager]
2026-04-11 00:36:24.525677 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:36:24.525696 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:36:24.525715 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:36:24.525735 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:36:24.525754 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:36:24.525773 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:36:24.525794 | orchestrator |
2026-04-11 00:36:24.525815 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-11 00:36:24.525835 | orchestrator | Saturday 11 April 2026 00:36:14 +0000 (0:00:00.965) 0:00:39.716 ********
2026-04-11 00:36:24.525857 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network) 
2026-04-11 00:36:24.525881 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network) 
2026-04-11 00:36:24.525908 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev) 
2026-04-11 00:36:24.525959 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev) 
2026-04-11 00:36:24.525994 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network) 
2026-04-11 00:36:24.526085 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network) 
2026-04-11 00:36:24.526111 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev) 
2026-04-11 00:36:24.526131 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev) 
2026-04-11 00:36:24.526150 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:36:24.526170 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network) 
2026-04-11 00:36:24.526190 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network) 
2026-04-11 00:36:24.526207 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev) 
2026-04-11 00:36:24.526229 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev) 
2026-04-11 00:36:24.526249 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:36:24.526267 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network) 
2026-04-11 00:36:24.526287 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network) 
2026-04-11 00:36:24.526306 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev) 
2026-04-11 00:36:24.526324 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev) 
2026-04-11 00:36:24.526345 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:36:24.526364 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network) 
2026-04-11 00:36:24.526383 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network) 
2026-04-11 00:36:24.526402 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev) 
2026-04-11 00:36:24.526421 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev) 
2026-04-11 00:36:24.526439 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:36:24.526458 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network) 
2026-04-11 00:36:24.526589 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network) 
2026-04-11 00:36:24.526613 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev) 
2026-04-11 00:36:24.526632 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev) 
2026-04-11 00:36:24.526651 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:36:24.526669 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:36:24.526688 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network) 
2026-04-11 00:36:24.526708 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network) 
2026-04-11 00:36:24.526728 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev) 
2026-04-11 00:36:24.526747 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev) 
2026-04-11 00:36:24.526765 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:36:24.526783 | orchestrator |
2026-04-11 00:36:24.526801 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-04-11 00:36:24.526844 | orchestrator | Saturday 11 April 2026 00:36:15 +0000 (0:00:00.659) 0:00:40.376 ********
2026-04-11 00:36:24.526863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:36:24.526883 | orchestrator |
2026-04-11 00:36:24.526903 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-04-11 00:36:24.526940 | orchestrator | Saturday 11 April 2026 00:36:16 +0000 (0:00:01.068) 0:00:41.444 ********
2026-04-11 00:36:24.526960 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:36:24.526980 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:36:24.526999 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:36:24.527017 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:36:24.527035 | orchestrator | skipping: [testbed-node-3]
2026-04-11 
00:36:24.527053 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:36:24.527071 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:36:24.527087 | orchestrator | 2026-04-11 00:36:24.527101 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-04-11 00:36:24.527116 | orchestrator | Saturday 11 April 2026 00:36:17 +0000 (0:00:00.645) 0:00:42.090 ******** 2026-04-11 00:36:24.527132 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:36:24.527148 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:36:24.527164 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:36:24.527178 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:36:24.527192 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:36:24.527207 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:36:24.527221 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:36:24.527238 | orchestrator | 2026-04-11 00:36:24.527254 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-04-11 00:36:24.527272 | orchestrator | Saturday 11 April 2026 00:36:17 +0000 (0:00:00.545) 0:00:42.636 ******** 2026-04-11 00:36:24.527289 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:36:24.527305 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:36:24.527322 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:36:24.527334 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:36:24.527344 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:36:24.527353 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:36:24.527363 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:36:24.527373 | orchestrator | 2026-04-11 00:36:24.527382 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-04-11 00:36:24.527393 | orchestrator | Saturday 11 April 2026 00:36:18 +0000 (0:00:00.654) 0:00:43.290 ******** 2026-04-11 
00:36:24.527402 | orchestrator | ok: [testbed-manager] 2026-04-11 00:36:24.527412 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:36:24.527422 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:36:24.527431 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:36:24.527441 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:36:24.527451 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:36:24.527460 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:36:24.527470 | orchestrator | 2026-04-11 00:36:24.527510 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-04-11 00:36:24.527525 | orchestrator | Saturday 11 April 2026 00:36:19 +0000 (0:00:01.591) 0:00:44.882 ******** 2026-04-11 00:36:24.527535 | orchestrator | ok: [testbed-manager] 2026-04-11 00:36:24.527545 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:36:24.527554 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:36:24.527564 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:36:24.527573 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:36:24.527583 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:36:24.527593 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:36:24.527602 | orchestrator | 2026-04-11 00:36:24.527612 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-04-11 00:36:24.527622 | orchestrator | Saturday 11 April 2026 00:36:21 +0000 (0:00:01.226) 0:00:46.108 ******** 2026-04-11 00:36:24.527631 | orchestrator | ok: [testbed-manager] 2026-04-11 00:36:24.527641 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:36:24.527651 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:36:24.527664 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:36:24.527675 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:36:24.527684 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:36:24.527694 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:36:24.527713 | orchestrator | 2026-04-11 00:36:24.527724 
| orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-04-11 00:36:24.527733 | orchestrator | Saturday 11 April 2026 00:36:23 +0000 (0:00:02.106) 0:00:48.215 ******** 2026-04-11 00:36:24.527743 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:36:24.527753 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:36:24.527762 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:36:24.527772 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:36:24.527781 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:36:24.527791 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:36:24.527800 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:36:24.527810 | orchestrator | 2026-04-11 00:36:24.527820 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-11 00:36:24.527830 | orchestrator | Saturday 11 April 2026 00:36:23 +0000 (0:00:00.609) 0:00:48.824 ******** 2026-04-11 00:36:24.527839 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:36:24.527849 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:36:24.527858 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:36:24.527868 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:36:24.527877 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:36:24.527887 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:36:24.527896 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:36:24.527906 | orchestrator | 2026-04-11 00:36:24.527916 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:36:24.527927 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-11 00:36:24.527941 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-11 00:36:24.527971 | orchestrator | testbed-node-1 : ok=24 
 changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-11 00:36:24.763641 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-11 00:36:24.763731 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-11 00:36:24.763745 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-11 00:36:24.763757 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-11 00:36:24.763769 | orchestrator | 2026-04-11 00:36:24.763782 | orchestrator | 2026-04-11 00:36:24.763794 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:36:24.763806 | orchestrator | Saturday 11 April 2026 00:36:24 +0000 (0:00:00.701) 0:00:49.525 ******** 2026-04-11 00:36:24.763817 | orchestrator | =============================================================================== 2026-04-11 00:36:24.763827 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.23s 2026-04-11 00:36:24.763838 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.88s 2026-04-11 00:36:24.763849 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.01s 2026-04-11 00:36:24.763878 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.97s 2026-04-11 00:36:24.763890 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.61s 2026-04-11 00:36:24.763901 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.11s 2026-04-11 00:36:24.763911 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.91s 2026-04-11 00:36:24.763922 | orchestrator | osism.commons.network : Remove 
netplan configuration template ----------- 1.86s 2026-04-11 00:36:24.763953 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.59s 2026-04-11 00:36:24.763968 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.54s 2026-04-11 00:36:24.763979 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.50s 2026-04-11 00:36:24.763990 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.48s 2026-04-11 00:36:24.764001 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.23s 2026-04-11 00:36:24.764012 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.07s 2026-04-11 00:36:24.764022 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.07s 2026-04-11 00:36:24.764033 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.06s 2026-04-11 00:36:24.764044 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.05s 2026-04-11 00:36:24.764054 | orchestrator | osism.commons.network : Create required directories --------------------- 1.04s 2026-04-11 00:36:24.764065 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.04s 2026-04-11 00:36:24.764076 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s 2026-04-11 00:36:24.939968 | orchestrator | + osism apply wireguard 2026-04-11 00:36:36.193016 | orchestrator | 2026-04-11 00:36:36 | INFO  | Prepare task for execution of wireguard. 2026-04-11 00:36:36.268952 | orchestrator | 2026-04-11 00:36:36 | INFO  | Task 617b4023-61ea-4e45-adbe-c28f1287cac7 (wireguard) was prepared for execution. 
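The `osism.commons.network` tasks earlier in this log render systemd-networkd netdev/network file pairs such as `/etc/systemd/network/30-vxlan0.netdev` and `30-vxlan0.network` (the filenames appear in the cleanup task items). A minimal sketch of what such a pair might look like for the `vxlan0` device from the loop items (VNI 42, MTU 1350, local IP and overlay address taken from the log); the exact option set the role emits, including how the unicast `dests` list is wired up, is an assumption:

```ini
# /etc/systemd/network/30-vxlan0.netdev  (sketch; assumed layout)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.10

# /etc/systemd/network/30-vxlan0.network  (sketch; assumed layout)
[Match]
Name=vxlan0

[Network]
Address=192.168.128.10/20
```

In a unicast setup like this one (no multicast group, an explicit `dests` list per node), the remote endpoints are typically added as static FDB entries; where exactly the role places those is not visible in this log.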
2026-04-11 00:36:36.269064 | orchestrator | 2026-04-11 00:36:36 | INFO  | It takes a moment until task 617b4023-61ea-4e45-adbe-c28f1287cac7 (wireguard) has been started and output is visible here. 2026-04-11 00:36:54.260121 | orchestrator | 2026-04-11 00:36:54.260244 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-04-11 00:36:54.260267 | orchestrator | 2026-04-11 00:36:54.260283 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-04-11 00:36:54.260298 | orchestrator | Saturday 11 April 2026 00:36:39 +0000 (0:00:00.284) 0:00:00.284 ******** 2026-04-11 00:36:54.260311 | orchestrator | ok: [testbed-manager] 2026-04-11 00:36:54.260326 | orchestrator | 2026-04-11 00:36:54.260340 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-04-11 00:36:54.260354 | orchestrator | Saturday 11 April 2026 00:36:41 +0000 (0:00:01.548) 0:00:01.832 ******** 2026-04-11 00:36:54.260368 | orchestrator | changed: [testbed-manager] 2026-04-11 00:36:54.260383 | orchestrator | 2026-04-11 00:36:54.260397 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-04-11 00:36:54.260471 | orchestrator | Saturday 11 April 2026 00:36:46 +0000 (0:00:05.686) 0:00:07.519 ******** 2026-04-11 00:36:54.260486 | orchestrator | changed: [testbed-manager] 2026-04-11 00:36:54.260500 | orchestrator | 2026-04-11 00:36:54.260514 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-04-11 00:36:54.260527 | orchestrator | Saturday 11 April 2026 00:36:47 +0000 (0:00:00.534) 0:00:08.054 ******** 2026-04-11 00:36:54.260541 | orchestrator | changed: [testbed-manager] 2026-04-11 00:36:54.260554 | orchestrator | 2026-04-11 00:36:54.260569 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-04-11 00:36:54.260583 | orchestrator 
| Saturday 11 April 2026 00:36:47 +0000 (0:00:00.426) 0:00:08.481 ******** 2026-04-11 00:36:54.260596 | orchestrator | ok: [testbed-manager] 2026-04-11 00:36:54.260610 | orchestrator | 2026-04-11 00:36:54.260623 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-04-11 00:36:54.260637 | orchestrator | Saturday 11 April 2026 00:36:48 +0000 (0:00:00.526) 0:00:09.007 ******** 2026-04-11 00:36:54.260650 | orchestrator | ok: [testbed-manager] 2026-04-11 00:36:54.260664 | orchestrator | 2026-04-11 00:36:54.260677 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-04-11 00:36:54.260722 | orchestrator | Saturday 11 April 2026 00:36:48 +0000 (0:00:00.404) 0:00:09.411 ******** 2026-04-11 00:36:54.260737 | orchestrator | ok: [testbed-manager] 2026-04-11 00:36:54.260749 | orchestrator | 2026-04-11 00:36:54.260763 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-04-11 00:36:54.260777 | orchestrator | Saturday 11 April 2026 00:36:49 +0000 (0:00:00.417) 0:00:09.829 ******** 2026-04-11 00:36:54.260790 | orchestrator | changed: [testbed-manager] 2026-04-11 00:36:54.260804 | orchestrator | 2026-04-11 00:36:54.260817 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-04-11 00:36:54.260831 | orchestrator | Saturday 11 April 2026 00:36:50 +0000 (0:00:01.156) 0:00:10.986 ******** 2026-04-11 00:36:54.260845 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-11 00:36:54.260858 | orchestrator | changed: [testbed-manager] 2026-04-11 00:36:54.260871 | orchestrator | 2026-04-11 00:36:54.260884 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-04-11 00:36:54.260897 | orchestrator | Saturday 11 April 2026 00:36:51 +0000 (0:00:00.987) 0:00:11.973 ******** 2026-04-11 00:36:54.260910 | orchestrator | changed: 
[testbed-manager] 2026-04-11 00:36:54.260923 | orchestrator | 2026-04-11 00:36:54.260936 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-04-11 00:36:54.260950 | orchestrator | Saturday 11 April 2026 00:36:53 +0000 (0:00:01.966) 0:00:13.939 ******** 2026-04-11 00:36:54.260964 | orchestrator | changed: [testbed-manager] 2026-04-11 00:36:54.260977 | orchestrator | 2026-04-11 00:36:54.260991 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:36:54.261004 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:36:54.261019 | orchestrator | 2026-04-11 00:36:54.261034 | orchestrator | 2026-04-11 00:36:54.261047 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:36:54.261061 | orchestrator | Saturday 11 April 2026 00:36:54 +0000 (0:00:00.888) 0:00:14.828 ******** 2026-04-11 00:36:54.261092 | orchestrator | =============================================================================== 2026-04-11 00:36:54.261107 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.69s 2026-04-11 00:36:54.261122 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.97s 2026-04-11 00:36:54.261134 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.55s 2026-04-11 00:36:54.261147 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.16s 2026-04-11 00:36:54.261160 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.99s 2026-04-11 00:36:54.261173 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.89s 2026-04-11 00:36:54.261186 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s 
2026-04-11 00:36:54.261199 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2026-04-11 00:36:54.261213 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2026-04-11 00:36:54.261228 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2026-04-11 00:36:54.261243 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s 2026-04-11 00:36:54.439056 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-04-11 00:36:54.473213 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-04-11 00:36:54.473334 | orchestrator | Dload Upload Total Spent Left Speed 2026-04-11 00:36:54.551262 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 178 0 --:--:-- --:--:-- --:--:-- 179 2026-04-11 00:36:54.565267 | orchestrator | + osism apply --environment custom workarounds 2026-04-11 00:36:55.812968 | orchestrator | 2026-04-11 00:36:55 | INFO  | Trying to run play workarounds in environment custom 2026-04-11 00:37:06.000360 | orchestrator | 2026-04-11 00:37:05 | INFO  | Prepare task for execution of workarounds. 2026-04-11 00:37:06.083810 | orchestrator | 2026-04-11 00:37:06 | INFO  | Task 649e079e-60b4-4a8e-b940-13ded548de63 (workarounds) was prepared for execution. 2026-04-11 00:37:06.083904 | orchestrator | 2026-04-11 00:37:06 | INFO  | It takes a moment until task 649e079e-60b4-4a8e-b940-13ded548de63 (workarounds) has been started and output is visible here. 
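The wireguard play above generates server keys and a preshared key, renders `wg0.conf` plus per-client configuration files, and then manages `wg-quick@wg0.service`. A minimal sketch of the kind of `wg0.conf` such a role typically produces; all keys, addresses, and the port below are placeholders, not values from this job:

```ini
# /etc/wireguard/wg0.conf  (sketch; keys/addresses/port are placeholders)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 10.8.0.2/32
```

The "Manage wg-quick@wg0.service service" and "Restart wg0 service" tasks correspond to enabling and (re)starting the templated unit, i.e. roughly `systemctl enable --now wg-quick@wg0`.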
2026-04-11 00:37:30.127970 | orchestrator | 2026-04-11 00:37:30.128106 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 00:37:30.128133 | orchestrator | 2026-04-11 00:37:30.128151 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-04-11 00:37:30.128167 | orchestrator | Saturday 11 April 2026 00:37:09 +0000 (0:00:00.179) 0:00:00.179 ******** 2026-04-11 00:37:30.128184 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-04-11 00:37:30.128200 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-04-11 00:37:30.128216 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-04-11 00:37:30.128232 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-04-11 00:37:30.128247 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-04-11 00:37:30.128263 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-04-11 00:37:30.128279 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-04-11 00:37:30.128295 | orchestrator | 2026-04-11 00:37:30.128312 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-04-11 00:37:30.128420 | orchestrator | 2026-04-11 00:37:30.128445 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-04-11 00:37:30.128462 | orchestrator | Saturday 11 April 2026 00:37:09 +0000 (0:00:00.594) 0:00:00.774 ******** 2026-04-11 00:37:30.128479 | orchestrator | ok: [testbed-manager] 2026-04-11 00:37:30.128500 | orchestrator | 2026-04-11 00:37:30.128518 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-04-11 00:37:30.128537 | orchestrator | 2026-04-11 00:37:30.128557 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-04-11 00:37:30.128577 | orchestrator | Saturday 11 April 2026 00:37:12 +0000 (0:00:02.297) 0:00:03.071 ******** 2026-04-11 00:37:30.128596 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:37:30.128617 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:37:30.128635 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:37:30.128655 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:37:30.128672 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:37:30.128692 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:37:30.128710 | orchestrator | 2026-04-11 00:37:30.128730 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-04-11 00:37:30.128746 | orchestrator | 2026-04-11 00:37:30.128763 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-04-11 00:37:30.128782 | orchestrator | Saturday 11 April 2026 00:37:14 +0000 (0:00:02.286) 0:00:05.357 ******** 2026-04-11 00:37:30.128801 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-11 00:37:30.128819 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-11 00:37:30.128837 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-11 00:37:30.128853 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-11 00:37:30.128890 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-11 00:37:30.128908 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-11 00:37:30.128952 | orchestrator | 2026-04-11 00:37:30.128968 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-04-11 00:37:30.128985 | orchestrator | Saturday 11 April 2026 00:37:15 +0000 (0:00:01.329) 0:00:06.686 ******** 2026-04-11 00:37:30.129001 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:37:30.129017 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:37:30.129033 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:37:30.129050 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:37:30.129066 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:37:30.129083 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:37:30.129099 | orchestrator | 2026-04-11 00:37:30.129115 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-04-11 00:37:30.129132 | orchestrator | Saturday 11 April 2026 00:37:19 +0000 (0:00:03.830) 0:00:10.517 ******** 2026-04-11 00:37:30.129148 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:37:30.129164 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:37:30.129181 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:37:30.129197 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:37:30.129213 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:37:30.129229 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:37:30.129245 | orchestrator | 2026-04-11 00:37:30.129261 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-04-11 00:37:30.129277 | orchestrator | 2026-04-11 00:37:30.129293 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-04-11 00:37:30.129309 | orchestrator | Saturday 11 April 2026 00:37:20 +0000 (0:00:00.514) 0:00:11.032 ******** 2026-04-11 00:37:30.129325 | orchestrator | changed: [testbed-manager] 2026-04-11 00:37:30.129368 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:37:30.129385 | orchestrator | changed: [testbed-node-1] 2026-04-11 
00:37:30.129401 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:37:30.129417 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:37:30.129434 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:37:30.129450 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:37:30.129467 | orchestrator | 2026-04-11 00:37:30.129483 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-04-11 00:37:30.129501 | orchestrator | Saturday 11 April 2026 00:37:21 +0000 (0:00:01.760) 0:00:12.792 ******** 2026-04-11 00:37:30.129518 | orchestrator | changed: [testbed-manager] 2026-04-11 00:37:30.129535 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:37:30.129553 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:37:30.129569 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:37:30.129585 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:37:30.129602 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:37:30.129646 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:37:30.129663 | orchestrator | 2026-04-11 00:37:30.129681 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-04-11 00:37:30.129699 | orchestrator | Saturday 11 April 2026 00:37:23 +0000 (0:00:01.516) 0:00:14.308 ******** 2026-04-11 00:37:30.129716 | orchestrator | ok: [testbed-manager] 2026-04-11 00:37:30.129733 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:37:30.129750 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:37:30.129767 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:37:30.129783 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:37:30.129800 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:37:30.129817 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:37:30.129834 | orchestrator | 2026-04-11 00:37:30.129851 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-04-11 00:37:30.129868 | orchestrator 
| Saturday 11 April 2026 00:37:24 +0000 (0:00:01.585) 0:00:15.893 ******** 2026-04-11 00:37:30.129885 | orchestrator | changed: [testbed-manager] 2026-04-11 00:37:30.129901 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:37:30.129918 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:37:30.129935 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:37:30.129966 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:37:30.129983 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:37:30.129999 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:37:30.130084 | orchestrator | 2026-04-11 00:37:30.130107 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-04-11 00:37:30.130124 | orchestrator | Saturday 11 April 2026 00:37:26 +0000 (0:00:01.650) 0:00:17.544 ******** 2026-04-11 00:37:30.130142 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:37:30.130159 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:37:30.130176 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:37:30.130194 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:37:30.130212 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:37:30.130230 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:37:30.130249 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:37:30.130266 | orchestrator | 2026-04-11 00:37:30.130284 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-04-11 00:37:30.130302 | orchestrator | 2026-04-11 00:37:30.130320 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-04-11 00:37:30.130362 | orchestrator | Saturday 11 April 2026 00:37:27 +0000 (0:00:00.722) 0:00:18.266 ******** 2026-04-11 00:37:30.130379 | orchestrator | ok: [testbed-manager] 2026-04-11 00:37:30.130396 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:37:30.130412 | orchestrator | ok: 
[testbed-node-0] 2026-04-11 00:37:30.130429 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:37:30.130446 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:37:30.130463 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:37:30.130479 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:37:30.130496 | orchestrator | 2026-04-11 00:37:30.130512 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:37:30.130531 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 00:37:30.130550 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:37:30.130576 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:37:30.130593 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:37:30.130610 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:37:30.130627 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:37:30.130643 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:37:30.130660 | orchestrator | 2026-04-11 00:37:30.130678 | orchestrator | 2026-04-11 00:37:30.130695 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:37:30.130712 | orchestrator | Saturday 11 April 2026 00:37:30 +0000 (0:00:02.850) 0:00:21.117 ******** 2026-04-11 00:37:30.130729 | orchestrator | =============================================================================== 2026-04-11 00:37:30.130746 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.83s 2026-04-11 00:37:30.130763 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.85s 2026-04-11 00:37:30.130781 | orchestrator | Apply netplan configuration --------------------------------------------- 2.30s 2026-04-11 00:37:30.130798 | orchestrator | Apply netplan configuration --------------------------------------------- 2.29s 2026-04-11 00:37:30.130814 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.76s 2026-04-11 00:37:30.130843 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.65s 2026-04-11 00:37:30.130860 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.59s 2026-04-11 00:37:30.130877 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.52s 2026-04-11 00:37:30.130893 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.33s 2026-04-11 00:37:30.130909 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.72s 2026-04-11 00:37:30.130927 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.59s 2026-04-11 00:37:30.130958 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.51s 2026-04-11 00:37:30.570720 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-04-11 00:37:41.819905 | orchestrator | 2026-04-11 00:37:41 | INFO  | Prepare task for execution of reboot. 2026-04-11 00:37:41.887123 | orchestrator | 2026-04-11 00:37:41 | INFO  | Task 69748e30-3b43-4178-b041-1df62fa6c77e (reboot) was prepared for execution. 2026-04-11 00:37:41.887223 | orchestrator | 2026-04-11 00:37:41 | INFO  | It takes a moment until task 69748e30-3b43-4178-b041-1df62fa6c77e (reboot) has been started and output is visible here. 
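The reboot task queued above is invoked as `osism apply reboot -l testbed-nodes -e ireallymeanit=yes`, and the play that follows contains an "Exit playbook, if user did not mean to reboot systems" guard. A minimal shell sketch of that confirmation-guard idea (the function name and messages here are illustrative, not from the playbook):

```shell
# Hypothetical sketch of the "ireallymeanit" confirmation guard: refuse to
# act unless the caller explicitly confirmed, mirroring the guard task in
# the reboot play above.
confirm_reboot() {
    if [ "${1:-}" != "yes" ]; then
        echo "refusing to reboot: run with ireallymeanit=yes" >&2
        return 1
    fi
    echo "rebooting"
}
```

In the log the guard task is skipped on every node because the confirmation variable was supplied, so the play proceeds straight to the reboot tasks.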
2026-04-11 00:37:52.348917 | orchestrator | 2026-04-11 00:37:52.349000 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-11 00:37:52.349008 | orchestrator | 2026-04-11 00:37:52.349013 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-11 00:37:52.349019 | orchestrator | Saturday 11 April 2026 00:37:44 +0000 (0:00:00.180) 0:00:00.180 ******** 2026-04-11 00:37:52.349024 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:37:52.349030 | orchestrator | 2026-04-11 00:37:52.349034 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-11 00:37:52.349039 | orchestrator | Saturday 11 April 2026 00:37:44 +0000 (0:00:00.103) 0:00:00.283 ******** 2026-04-11 00:37:52.349044 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:37:52.349049 | orchestrator | 2026-04-11 00:37:52.349053 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-11 00:37:52.349058 | orchestrator | Saturday 11 April 2026 00:37:46 +0000 (0:00:01.203) 0:00:01.487 ******** 2026-04-11 00:37:52.349063 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:37:52.349067 | orchestrator | 2026-04-11 00:37:52.349072 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-11 00:37:52.349078 | orchestrator | 2026-04-11 00:37:52.349085 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-11 00:37:52.349093 | orchestrator | Saturday 11 April 2026 00:37:46 +0000 (0:00:00.094) 0:00:01.582 ******** 2026-04-11 00:37:52.349101 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:37:52.349109 | orchestrator | 2026-04-11 00:37:52.349117 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-11 00:37:52.349122 | orchestrator | Saturday 11 April 
2026 00:37:46 +0000 (0:00:00.088) 0:00:01.671 ******** 2026-04-11 00:37:52.349127 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:37:52.349131 | orchestrator | 2026-04-11 00:37:52.349136 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-11 00:37:52.349141 | orchestrator | Saturday 11 April 2026 00:37:47 +0000 (0:00:00.989) 0:00:02.661 ******** 2026-04-11 00:37:52.349145 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:37:52.349150 | orchestrator | 2026-04-11 00:37:52.349154 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-11 00:37:52.349159 | orchestrator | 2026-04-11 00:37:52.349164 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-11 00:37:52.349168 | orchestrator | Saturday 11 April 2026 00:37:47 +0000 (0:00:00.104) 0:00:02.766 ******** 2026-04-11 00:37:52.349173 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:37:52.349178 | orchestrator | 2026-04-11 00:37:52.349193 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-11 00:37:52.349215 | orchestrator | Saturday 11 April 2026 00:37:47 +0000 (0:00:00.101) 0:00:02.867 ******** 2026-04-11 00:37:52.349220 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:37:52.349224 | orchestrator | 2026-04-11 00:37:52.349231 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-11 00:37:52.349239 | orchestrator | Saturday 11 April 2026 00:37:48 +0000 (0:00:01.021) 0:00:03.889 ******** 2026-04-11 00:37:52.349249 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:37:52.349259 | orchestrator | 2026-04-11 00:37:52.349266 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-11 00:37:52.349273 | orchestrator | 2026-04-11 00:37:52.349280 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-04-11 00:37:52.349325 | orchestrator | Saturday 11 April 2026 00:37:48 +0000 (0:00:00.095) 0:00:03.984 ******** 2026-04-11 00:37:52.349332 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:37:52.349338 | orchestrator | 2026-04-11 00:37:52.349346 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-11 00:37:52.349352 | orchestrator | Saturday 11 April 2026 00:37:48 +0000 (0:00:00.087) 0:00:04.072 ******** 2026-04-11 00:37:52.349358 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:37:52.349365 | orchestrator | 2026-04-11 00:37:52.349373 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-11 00:37:52.349380 | orchestrator | Saturday 11 April 2026 00:37:49 +0000 (0:00:01.000) 0:00:05.072 ******** 2026-04-11 00:37:52.349387 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:37:52.349394 | orchestrator | 2026-04-11 00:37:52.349400 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-11 00:37:52.349406 | orchestrator | 2026-04-11 00:37:52.349412 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-11 00:37:52.349419 | orchestrator | Saturday 11 April 2026 00:37:49 +0000 (0:00:00.113) 0:00:05.186 ******** 2026-04-11 00:37:52.349426 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:37:52.349433 | orchestrator | 2026-04-11 00:37:52.349439 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-11 00:37:52.349445 | orchestrator | Saturday 11 April 2026 00:37:49 +0000 (0:00:00.085) 0:00:05.272 ******** 2026-04-11 00:37:52.349452 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:37:52.349458 | orchestrator | 2026-04-11 00:37:52.349465 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-11 00:37:52.349472 | orchestrator | Saturday 11 April 2026 00:37:50 +0000 (0:00:00.995) 0:00:06.267 ******** 2026-04-11 00:37:52.349478 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:37:52.349485 | orchestrator | 2026-04-11 00:37:52.349491 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-11 00:37:52.349498 | orchestrator | 2026-04-11 00:37:52.349505 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-11 00:37:52.349512 | orchestrator | Saturday 11 April 2026 00:37:51 +0000 (0:00:00.098) 0:00:06.366 ******** 2026-04-11 00:37:52.349518 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:37:52.349525 | orchestrator | 2026-04-11 00:37:52.349532 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-11 00:37:52.349539 | orchestrator | Saturday 11 April 2026 00:37:51 +0000 (0:00:00.101) 0:00:06.468 ******** 2026-04-11 00:37:52.349546 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:37:52.349553 | orchestrator | 2026-04-11 00:37:52.349561 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-11 00:37:52.349568 | orchestrator | Saturday 11 April 2026 00:37:52 +0000 (0:00:01.001) 0:00:07.469 ******** 2026-04-11 00:37:52.349590 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:37:52.349610 | orchestrator | 2026-04-11 00:37:52.349617 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:37:52.349625 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:37:52.349642 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:37:52.349650 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-11 00:37:52.349658 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:37:52.349664 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:37:52.349671 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:37:52.349677 | orchestrator | 2026-04-11 00:37:52.349683 | orchestrator | 2026-04-11 00:37:52.349690 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:37:52.349697 | orchestrator | Saturday 11 April 2026 00:37:52 +0000 (0:00:00.031) 0:00:07.501 ******** 2026-04-11 00:37:52.349703 | orchestrator | =============================================================================== 2026-04-11 00:37:52.349710 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.21s 2026-04-11 00:37:52.349716 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.57s 2026-04-11 00:37:52.349724 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.54s 2026-04-11 00:37:52.468633 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-11 00:38:03.657526 | orchestrator | 2026-04-11 00:38:03 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-11 00:38:03.722438 | orchestrator | 2026-04-11 00:38:03 | INFO  | Task c8cf5be5-a24a-4609-a743-489d56346858 (wait-for-connection) was prepared for execution. 2026-04-11 00:38:03.722532 | orchestrator | 2026-04-11 00:38:03 | INFO  | It takes a moment until task c8cf5be5-a24a-4609-a743-489d56346858 (wait-for-connection) has been started and output is visible here. 
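The reboot play above deliberately does not wait for the nodes to come back ("do not wait for the reboot to complete"); a separate `wait-for-connection` run then polls each node until it is reachable again. The generic retry shape behind such a wait step can be sketched as (helper name and timings are illustrative):

```shell
# Generic retry helper mirroring the wait loops in this log: run a probe
# command repeatedly until it succeeds or the attempt budget is exhausted.
retry_until() {
    local max_attempts="$1"; shift
    local attempt=1
    until "$@"; do
        if [ "${attempt}" -ge "${max_attempts}" ]; then
            return 1
        fi
        attempt=$((attempt + 1))
        sleep 1
    done
}
```

Splitting "trigger reboot" and "wait for connection" into two plays keeps the reboot task from blocking on SSH sessions that are being torn down mid-reboot.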
2026-04-11 00:38:18.413731 | orchestrator | 2026-04-11 00:38:18.414590 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-11 00:38:18.414629 | orchestrator | 2026-04-11 00:38:18.414643 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-11 00:38:18.414657 | orchestrator | Saturday 11 April 2026 00:38:06 +0000 (0:00:00.275) 0:00:00.275 ******** 2026-04-11 00:38:18.414669 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:38:18.414682 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:38:18.414693 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:38:18.414705 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:38:18.414716 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:38:18.414727 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:38:18.414739 | orchestrator | 2026-04-11 00:38:18.414750 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:38:18.414763 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:38:18.414777 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:38:18.414788 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:38:18.414800 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:38:18.414811 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:38:18.414851 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:38:18.414864 | orchestrator | 2026-04-11 00:38:18.414876 | orchestrator | 2026-04-11 00:38:18.414887 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-11 00:38:18.414899 | orchestrator | Saturday 11 April 2026 00:38:18 +0000 (0:00:11.489) 0:00:11.765 ******** 2026-04-11 00:38:18.414910 | orchestrator | =============================================================================== 2026-04-11 00:38:18.414921 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.49s 2026-04-11 00:38:18.578573 | orchestrator | + osism apply hddtemp 2026-04-11 00:38:29.841923 | orchestrator | 2026-04-11 00:38:29 | INFO  | Prepare task for execution of hddtemp. 2026-04-11 00:38:29.916504 | orchestrator | 2026-04-11 00:38:29 | INFO  | Task d1982562-bf2f-4727-b445-cf4b430b17c0 (hddtemp) was prepared for execution. 2026-04-11 00:38:29.916604 | orchestrator | 2026-04-11 00:38:29 | INFO  | It takes a moment until task d1982562-bf2f-4727-b445-cf4b430b17c0 (hddtemp) has been started and output is visible here. 2026-04-11 00:38:57.527561 | orchestrator | 2026-04-11 00:38:57.527675 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-11 00:38:57.527693 | orchestrator | 2026-04-11 00:38:57.527705 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-11 00:38:57.527718 | orchestrator | Saturday 11 April 2026 00:38:33 +0000 (0:00:00.334) 0:00:00.334 ******** 2026-04-11 00:38:57.527730 | orchestrator | ok: [testbed-manager] 2026-04-11 00:38:57.527742 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:38:57.527754 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:38:57.527765 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:38:57.527776 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:38:57.527787 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:38:57.527798 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:38:57.527809 | orchestrator | 2026-04-11 00:38:57.527820 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-11 00:38:57.527832 | orchestrator | Saturday 11 April 2026 00:38:33 +0000 (0:00:00.577) 0:00:00.911 ******** 2026-04-11 00:38:57.527845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:38:57.527858 | orchestrator | 2026-04-11 00:38:57.527870 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-11 00:38:57.527881 | orchestrator | Saturday 11 April 2026 00:38:34 +0000 (0:00:01.104) 0:00:02.016 ******** 2026-04-11 00:38:57.527892 | orchestrator | ok: [testbed-manager] 2026-04-11 00:38:57.527904 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:38:57.527915 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:38:57.527926 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:38:57.527937 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:38:57.527948 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:38:57.527959 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:38:57.527970 | orchestrator | 2026-04-11 00:38:57.527982 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-11 00:38:57.527993 | orchestrator | Saturday 11 April 2026 00:38:37 +0000 (0:00:02.399) 0:00:04.415 ******** 2026-04-11 00:38:57.528004 | orchestrator | changed: [testbed-manager] 2026-04-11 00:38:57.528016 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:38:57.528028 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:38:57.528040 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:38:57.528051 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:38:57.528078 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:38:57.528090 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:38:57.528103 | 
orchestrator | 2026-04-11 00:38:57.528116 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-11 00:38:57.528152 | orchestrator | Saturday 11 April 2026 00:38:38 +0000 (0:00:00.932) 0:00:05.348 ******** 2026-04-11 00:38:57.528229 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:38:57.528241 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:38:57.528253 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:38:57.528266 | orchestrator | ok: [testbed-manager] 2026-04-11 00:38:57.528278 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:38:57.528290 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:38:57.528302 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:38:57.528314 | orchestrator | 2026-04-11 00:38:57.528327 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-11 00:38:57.528339 | orchestrator | Saturday 11 April 2026 00:38:39 +0000 (0:00:01.298) 0:00:06.647 ******** 2026-04-11 00:38:57.528351 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:38:57.528364 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:38:57.528376 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:38:57.528387 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:38:57.528401 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:38:57.528413 | orchestrator | changed: [testbed-manager] 2026-04-11 00:38:57.528425 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:38:57.528437 | orchestrator | 2026-04-11 00:38:57.528450 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-11 00:38:57.528462 | orchestrator | Saturday 11 April 2026 00:38:39 +0000 (0:00:00.563) 0:00:07.211 ******** 2026-04-11 00:38:57.528472 | orchestrator | changed: [testbed-manager] 2026-04-11 00:38:57.528483 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:38:57.528493 | orchestrator | changed: [testbed-node-0] 
2026-04-11 00:38:57.528504 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:38:57.528514 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:38:57.528525 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:38:57.528536 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:38:57.528547 | orchestrator | 2026-04-11 00:38:57.528558 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-11 00:38:57.528568 | orchestrator | Saturday 11 April 2026 00:38:53 +0000 (0:00:13.799) 0:00:21.010 ******** 2026-04-11 00:38:57.528579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:38:57.528591 | orchestrator | 2026-04-11 00:38:57.528601 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-11 00:38:57.528612 | orchestrator | Saturday 11 April 2026 00:38:54 +0000 (0:00:01.130) 0:00:22.141 ******** 2026-04-11 00:38:57.528623 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:38:57.528633 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:38:57.528644 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:38:57.528654 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:38:57.528665 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:38:57.528675 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:38:57.528686 | orchestrator | changed: [testbed-manager] 2026-04-11 00:38:57.528697 | orchestrator | 2026-04-11 00:38:57.528707 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:38:57.528718 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:38:57.528749 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 00:38:57.528761 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 00:38:57.528772 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 00:38:57.528791 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 00:38:57.528802 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 00:38:57.528813 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 00:38:57.528823 | orchestrator | 2026-04-11 00:38:57.528834 | orchestrator | 2026-04-11 00:38:57.528845 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:38:57.528856 | orchestrator | Saturday 11 April 2026 00:38:57 +0000 (0:00:02.354) 0:00:24.495 ******** 2026-04-11 00:38:57.528867 | orchestrator | =============================================================================== 2026-04-11 00:38:57.528877 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.80s 2026-04-11 00:38:57.528888 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.40s 2026-04-11 00:38:57.528899 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.35s 2026-04-11 00:38:57.528910 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.30s 2026-04-11 00:38:57.528921 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.13s 2026-04-11 00:38:57.528931 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.10s 2026-04-11 00:38:57.528942 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.93s 2026-04-11 00:38:57.528953 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.58s 2026-04-11 00:38:57.528964 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.56s 2026-04-11 00:38:57.687021 | orchestrator | ++ semver 10.0.0 7.1.1 2026-04-11 00:38:57.738505 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-11 00:38:57.738591 | orchestrator | + sudo systemctl restart manager.service 2026-04-11 00:39:10.792002 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-11 00:39:10.792224 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-11 00:39:10.792259 | orchestrator | + local max_attempts=60 2026-04-11 00:39:10.792273 | orchestrator | + local name=ceph-ansible 2026-04-11 00:39:10.792284 | orchestrator | + local attempt_num=1 2026-04-11 00:39:10.792296 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 00:39:10.827003 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-11 00:39:10.827111 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-11 00:39:10.827134 | orchestrator | + sleep 5 2026-04-11 00:39:15.830496 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 00:39:15.905593 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-11 00:39:15.905687 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-11 00:39:15.905701 | orchestrator | + sleep 5 2026-04-11 00:39:20.922453 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 00:39:20.957021 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-11 00:39:20.957111 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-11 00:39:20.957156 | orchestrator | + sleep 5 2026-04-11 00:39:25.962803 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 
2026-04-11 00:39:26.005020 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-11 00:39:26.005106 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-11 00:39:26.005146 | orchestrator | + sleep 5 2026-04-11 00:39:31.010344 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 00:39:31.049261 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-11 00:39:31.049372 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-11 00:39:31.049397 | orchestrator | + sleep 5 2026-04-11 00:39:36.053727 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 00:39:36.093202 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-11 00:39:36.093280 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-11 00:39:36.093294 | orchestrator | + sleep 5 2026-04-11 00:39:41.097919 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 00:39:41.130382 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-11 00:39:41.130474 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-11 00:39:41.130489 | orchestrator | + sleep 5 2026-04-11 00:39:46.134999 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 00:39:46.171505 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-11 00:39:46.171602 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-11 00:39:46.171617 | orchestrator | + sleep 5 2026-04-11 00:39:51.174286 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 00:39:51.205726 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-11 00:39:51.205840 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-11 00:39:51.205864 | orchestrator | + sleep 5 2026-04-11 00:39:56.209805 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 
00:39:56.247553 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-11 00:39:56.247642 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-11 00:39:56.247656 | orchestrator | + sleep 5 2026-04-11 00:40:01.252220 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 00:40:01.285977 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-11 00:40:01.286147 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-11 00:40:01.286161 | orchestrator | + sleep 5 2026-04-11 00:40:06.289454 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 00:40:06.319412 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-11 00:40:06.319490 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-11 00:40:06.319499 | orchestrator | + sleep 5 2026-04-11 00:40:11.323208 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 00:40:11.358257 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-11 00:40:11.358316 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-11 00:40:11.358322 | orchestrator | + sleep 5 2026-04-11 00:40:16.362862 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 00:40:16.400594 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-11 00:40:16.400634 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-11 00:40:16.400640 | orchestrator | + local max_attempts=60 2026-04-11 00:40:16.400645 | orchestrator | + local name=kolla-ansible 2026-04-11 00:40:16.400650 | orchestrator | + local attempt_num=1 2026-04-11 00:40:16.400762 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-11 00:40:16.428692 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-11 00:40:16.428750 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-11 00:40:16.428755 | 
orchestrator | + local max_attempts=60 2026-04-11 00:40:16.428764 | orchestrator | + local name=osism-ansible 2026-04-11 00:40:16.428769 | orchestrator | + local attempt_num=1 2026-04-11 00:40:16.429468 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-11 00:40:16.454648 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-11 00:40:16.454681 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-11 00:40:16.454686 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-11 00:40:16.583571 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-11 00:40:16.746035 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-11 00:40:16.907246 | orchestrator | ARA in osism-ansible already disabled. 2026-04-11 00:40:17.052768 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-11 00:40:17.053301 | orchestrator | + osism apply gather-facts 2026-04-11 00:40:28.196187 | orchestrator | 2026-04-11 00:40:28 | INFO  | Prepare task for execution of gather-facts. 2026-04-11 00:40:28.253203 | orchestrator | 2026-04-11 00:40:28 | INFO  | Task bc4c8143-4ac4-4423-a2ac-733385a7f17e (gather-facts) was prepared for execution. 2026-04-11 00:40:28.253305 | orchestrator | 2026-04-11 00:40:28 | INFO  | It takes a moment until task bc4c8143-4ac4-4423-a2ac-733385a7f17e (gather-facts) has been started and output is visible here. 
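The bash trace above spells out the `wait_for_container_healthy` loop step by step: it polls `docker inspect -f '{{.State.Health.Status}}'` for the named container every 5 seconds until the status is `healthy` or the attempt budget runs out (here, ceph-ansible goes `unhealthy` → `starting` → `healthy` after the manager restart). A reconstruction of that function from the trace (the timeout message is illustrative; the rest follows the trace):

```shell
# Reconstructed from the trace above: poll the container's Docker health
# status every 5 seconds until it reports "healthy", giving up after
# max_attempts polls.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "${name}")" = healthy ]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

The same function is then reused for kolla-ansible and osism-ansible, which are already healthy and pass on the first poll.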
2026-04-11 00:40:40.351327 | orchestrator |
2026-04-11 00:40:40.351403 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-11 00:40:40.351410 | orchestrator |
2026-04-11 00:40:40.351415 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-11 00:40:40.351439 | orchestrator | Saturday 11 April 2026 00:40:31 +0000 (0:00:00.250) 0:00:00.250 ********
2026-04-11 00:40:40.351443 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:40:40.351448 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:40:40.351452 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:40:40.351468 | orchestrator | ok: [testbed-manager]
2026-04-11 00:40:40.351471 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:40:40.351475 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:40:40.351479 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:40:40.351484 | orchestrator |
2026-04-11 00:40:40.351488 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-11 00:40:40.351492 | orchestrator |
2026-04-11 00:40:40.351496 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-11 00:40:40.351500 | orchestrator | Saturday 11 April 2026 00:40:39 +0000 (0:00:08.479) 0:00:08.729 ********
2026-04-11 00:40:40.351504 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:40:40.351509 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:40:40.351513 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:40:40.351517 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:40:40.351521 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:40:40.351525 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:40:40.351529 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:40:40.351533 | orchestrator |
2026-04-11 00:40:40.351537 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:40:40.351541 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 00:40:40.351547 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 00:40:40.351551 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 00:40:40.351555 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 00:40:40.351559 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 00:40:40.351563 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 00:40:40.351567 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 00:40:40.351571 | orchestrator |
2026-04-11 00:40:40.351575 | orchestrator |
2026-04-11 00:40:40.351579 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:40:40.351583 | orchestrator | Saturday 11 April 2026 00:40:40 +0000 (0:00:00.541) 0:00:09.271 ********
2026-04-11 00:40:40.351587 | orchestrator | ===============================================================================
2026-04-11 00:40:40.351591 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.48s
2026-04-11 00:40:40.351595 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2026-04-11 00:40:40.472859 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-04-11 00:40:40.488325 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-04-11 00:40:40.505789 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-04-11 00:40:40.524403 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-04-11 00:40:40.541652 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-04-11 00:40:40.558259 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-04-11 00:40:40.576082 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-04-11 00:40:40.588180 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-04-11 00:40:40.605110 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-04-11 00:40:40.621199 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-04-11 00:40:40.640224 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-04-11 00:40:40.651457 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-04-11 00:40:40.661868 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-04-11 00:40:40.671817 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-04-11 00:40:40.686856 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-04-11 00:40:40.695813 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-04-11 00:40:40.709624 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-04-11 00:40:40.721375 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-04-11 00:40:40.743501 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-04-11 00:40:40.760922 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia
2026-04-11 00:40:40.775415 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-04-11 00:40:40.792616 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-04-11 00:40:40.809908 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-04-11 00:40:40.827742 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-11 00:40:41.143145 | orchestrator | ok: Runtime: 0:23:36.560719
2026-04-11 00:40:41.279481 |
2026-04-11 00:40:41.279644 | TASK [Deploy services]
2026-04-11 00:40:41.813626 | orchestrator | skipping: Conditional result was False
2026-04-11 00:40:41.832557 |
2026-04-11 00:40:41.832777 | TASK [Deploy in a nutshell]
2026-04-11 00:40:42.518504 | orchestrator | + set -e
2026-04-11 00:40:42.518683 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-11 00:40:42.519159 | orchestrator | ++ export INTERACTIVE=false
2026-04-11 00:40:42.519240 | orchestrator | ++ INTERACTIVE=false
2026-04-11 00:40:42.519253 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-11 00:40:42.519261 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-11 00:40:42.519325 | orchestrator |
2026-04-11 00:40:42.519357 | orchestrator | # PULL IMAGES
2026-04-11 00:40:42.519372 | orchestrator |
2026-04-11 00:40:42.519396 | orchestrator | + source /opt/manager-vars.sh
2026-04-11 00:40:42.519404 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-11 00:40:42.519416 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-11 00:40:42.519423 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-11 00:40:42.519434 | orchestrator | ++ CEPH_VERSION=reef
2026-04-11 00:40:42.519442 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-11 00:40:42.519453 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-11 00:40:42.519459 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-11 00:40:42.519469 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-11 00:40:42.519476 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-11 00:40:42.519484 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-11 00:40:42.519491 | orchestrator | ++ export ARA=false
2026-04-11 00:40:42.519498 | orchestrator | ++ ARA=false
2026-04-11 00:40:42.519505 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-11 00:40:42.519512 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-11 00:40:42.519518 | orchestrator | ++ export TEMPEST=true
2026-04-11 00:40:42.519525 | orchestrator | ++ TEMPEST=true
2026-04-11 00:40:42.519532 | orchestrator | ++ export IS_ZUUL=true
2026-04-11 00:40:42.519538 | orchestrator | ++ IS_ZUUL=true
2026-04-11 00:40:42.519545 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.75
2026-04-11 00:40:42.519551 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.75
2026-04-11 00:40:42.519558 | orchestrator | ++ export EXTERNAL_API=false
2026-04-11 00:40:42.519564 | orchestrator | ++ EXTERNAL_API=false
2026-04-11 00:40:42.519571 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-11 00:40:42.519578 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-11 00:40:42.519585 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-11 00:40:42.519591 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-11 00:40:42.519598 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-11 00:40:42.519605 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-11 00:40:42.519611 | orchestrator | + echo
2026-04-11 00:40:42.519618 | orchestrator | + echo '# PULL IMAGES'
2026-04-11 00:40:42.519625 | orchestrator | + echo
2026-04-11 00:40:42.521263 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-11 00:40:42.571406 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-11 00:40:42.571485 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-11 00:40:43.672252 | orchestrator | 2026-04-11 00:40:43 | INFO  | Trying to run play pull-images in environment custom
2026-04-11 00:40:53.696337 | orchestrator | 2026-04-11 00:40:53 | INFO  | Prepare task for execution of pull-images.
2026-04-11 00:40:53.765099 | orchestrator | 2026-04-11 00:40:53 | INFO  | Task d9069a12-3801-4770-8821-a6d0c2e5920f (pull-images) was prepared for execution.
2026-04-11 00:40:53.766780 | orchestrator | 2026-04-11 00:40:53 | INFO  | Task d9069a12-3801-4770-8821-a6d0c2e5920f is running in background. No more output. Check ARA for logs.
2026-04-11 00:40:55.109633 | orchestrator | 2026-04-11 00:40:55 | INFO  | Trying to run play wipe-partitions in environment custom
2026-04-11 00:41:05.164213 | orchestrator | 2026-04-11 00:41:05 | INFO  | Prepare task for execution of wipe-partitions.
2026-04-11 00:41:05.231217 | orchestrator | 2026-04-11 00:41:05 | INFO  | Task ad5ae8c3-0aea-47d5-a859-20fc075f8086 (wipe-partitions) was prepared for execution.
2026-04-11 00:41:05.231321 | orchestrator | 2026-04-11 00:41:05 | INFO  | It takes a moment until task ad5ae8c3-0aea-47d5-a859-20fc075f8086 (wipe-partitions) has been started and output is visible here.
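The trace shows `semver 10.0.0 7.0.0` returning `1` and the script gating the newer `osism apply --no-wait -r 2` invocation on `[[ 1 -ge 0 ]]`, i.e. "manager version at least 7.0.0". A rough stand-in for such a version comparison, built on `sort -V` (the function name `semver_cmp` and the approach are assumptions; the testbed's actual `semver` helper may be implemented differently):

```shell
#!/usr/bin/env bash
# Prints -1, 0 or 1 depending on how the first version compares to the
# second, using GNU sort's version ordering.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1   # $1 sorts first, so it is the older version
    else
        echo 1
    fi
}

result=$(semver_cmp 10.0.0 7.0.0)
if [[ $result -ge 0 ]]; then
    echo "manager >= 7.0.0, retry flag supported"
fi
```

`sort -V` handles the `10.0.0 > 7.0.0` case correctly where plain lexicographic comparison would not, which is the whole point of using a version-aware comparison here.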
2026-04-11 00:41:17.428173 | orchestrator |
2026-04-11 00:41:17.428273 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-04-11 00:41:17.428289 | orchestrator |
2026-04-11 00:41:17.428300 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-04-11 00:41:17.428319 | orchestrator | Saturday 11 April 2026 00:41:08 +0000 (0:00:00.147) 0:00:00.147 ********
2026-04-11 00:41:17.428348 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:41:17.428409 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:41:17.428431 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:41:17.428450 | orchestrator |
2026-04-11 00:41:17.428469 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-04-11 00:41:17.428490 | orchestrator | Saturday 11 April 2026 00:41:08 +0000 (0:00:00.943) 0:00:01.090 ********
2026-04-11 00:41:17.428510 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:17.428536 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:41:17.428548 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:41:17.428559 | orchestrator |
2026-04-11 00:41:17.428570 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-04-11 00:41:17.428581 | orchestrator | Saturday 11 April 2026 00:41:09 +0000 (0:00:00.229) 0:00:01.319 ********
2026-04-11 00:41:17.428593 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:41:17.428604 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:41:17.428615 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:41:17.428626 | orchestrator |
2026-04-11 00:41:17.428638 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-04-11 00:41:17.428649 | orchestrator | Saturday 11 April 2026 00:41:09 +0000 (0:00:00.533) 0:00:01.853 ********
2026-04-11 00:41:17.428660 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:17.428671 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:41:17.428681 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:41:17.428692 | orchestrator |
2026-04-11 00:41:17.428703 | orchestrator | TASK [Check device availability] ***********************************************
2026-04-11 00:41:17.428714 | orchestrator | Saturday 11 April 2026 00:41:09 +0000 (0:00:00.238) 0:00:02.091 ********
2026-04-11 00:41:17.428729 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-11 00:41:17.428746 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-11 00:41:17.428759 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-11 00:41:17.428772 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-11 00:41:17.428784 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-11 00:41:17.428796 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-11 00:41:17.428809 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-11 00:41:17.428821 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-11 00:41:17.428834 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-11 00:41:17.428847 | orchestrator |
2026-04-11 00:41:17.428860 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-04-11 00:41:17.428879 | orchestrator | Saturday 11 April 2026 00:41:11 +0000 (0:00:01.317) 0:00:03.409 ********
2026-04-11 00:41:17.428906 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-04-11 00:41:17.428930 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-04-11 00:41:17.428946 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-04-11 00:41:17.428963 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-04-11 00:41:17.428980 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-04-11 00:41:17.429062 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-04-11 00:41:17.429084 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-04-11 00:41:17.429101 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-04-11 00:41:17.429125 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-04-11 00:41:17.429142 | orchestrator |
2026-04-11 00:41:17.429159 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-04-11 00:41:17.429177 | orchestrator | Saturday 11 April 2026 00:41:12 +0000 (0:00:01.349) 0:00:04.759 ********
2026-04-11 00:41:17.429195 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-11 00:41:17.429212 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-11 00:41:17.429232 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-11 00:41:17.429249 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-11 00:41:17.429269 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-11 00:41:17.429294 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-11 00:41:17.429306 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-11 00:41:17.429317 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-11 00:41:17.429328 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-11 00:41:17.429339 | orchestrator |
2026-04-11 00:41:17.429350 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-04-11 00:41:17.429361 | orchestrator | Saturday 11 April 2026 00:41:15 +0000 (0:00:03.192) 0:00:07.952 ********
2026-04-11 00:41:17.429373 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:41:17.429393 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:41:17.429420 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:41:17.429441 | orchestrator |
2026-04-11 00:41:17.429458 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-04-11 00:41:17.429477 | orchestrator | Saturday 11 April 2026 00:41:16 +0000 (0:00:00.594) 0:00:08.546 ********
2026-04-11 00:41:17.429497 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:41:17.429515 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:41:17.429532 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:41:17.429543 | orchestrator |
2026-04-11 00:41:17.429557 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:41:17.429569 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:41:17.429582 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:41:17.429614 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:41:17.429625 | orchestrator |
2026-04-11 00:41:17.429636 | orchestrator |
2026-04-11 00:41:17.429647 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:41:17.429658 | orchestrator | Saturday 11 April 2026 00:41:17 +0000 (0:00:00.793) 0:00:09.340 ********
2026-04-11 00:41:17.429669 | orchestrator | ===============================================================================
2026-04-11 00:41:17.429680 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.19s
2026-04-11 00:41:17.429691 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.35s
2026-04-11 00:41:17.429702 | orchestrator | Check device availability ----------------------------------------------- 1.32s
2026-04-11 00:41:17.429713 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.94s
2026-04-11 00:41:17.429724 | orchestrator | Request device events from the kernel ----------------------------------- 0.79s
2026-04-11 00:41:17.429734 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s
2026-04-11 00:41:17.429745 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.53s
2026-04-11 00:41:17.429756 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s
2026-04-11 00:41:17.429767 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s
2026-04-11 00:41:28.991854 | orchestrator | 2026-04-11 00:41:28 | INFO  | Prepare task for execution of facts.
2026-04-11 00:41:29.063376 | orchestrator | 2026-04-11 00:41:29 | INFO  | Task 0c56ad2f-1c95-4382-9c07-591f7913a357 (facts) was prepared for execution.
2026-04-11 00:41:29.063460 | orchestrator | 2026-04-11 00:41:29 | INFO  | It takes a moment until task 0c56ad2f-1c95-4382-9c07-591f7913a357 (facts) has been started and output is visible here.
2026-04-11 00:41:39.787647 | orchestrator |
2026-04-11 00:41:39.787759 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-11 00:41:39.787775 | orchestrator |
2026-04-11 00:41:39.787788 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-11 00:41:39.787827 | orchestrator | Saturday 11 April 2026 00:41:32 +0000 (0:00:00.337) 0:00:00.337 ********
2026-04-11 00:41:39.787840 | orchestrator | ok: [testbed-manager]
2026-04-11 00:41:39.787852 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:41:39.787863 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:41:39.787873 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:41:39.787884 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:41:39.787895 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:41:39.787905 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:41:39.787916 | orchestrator |
2026-04-11 00:41:39.787927 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-11 00:41:39.787938 | orchestrator | Saturday 11 April 2026 00:41:33 +0000 (0:00:01.291) 0:00:01.628 ********
2026-04-11 00:41:39.787949 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:41:39.787960 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:41:39.787972 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:41:39.788103 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:41:39.788212 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:39.788236 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:41:39.788254 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:41:39.788273 | orchestrator |
2026-04-11 00:41:39.788292 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-11 00:41:39.788310 | orchestrator |
2026-04-11 00:41:39.788328 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-11 00:41:39.788346 | orchestrator | Saturday 11 April 2026 00:41:34 +0000 (0:00:01.082) 0:00:02.711 ********
2026-04-11 00:41:39.788363 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:41:39.788382 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:41:39.788401 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:41:39.788419 | orchestrator | ok: [testbed-manager]
2026-04-11 00:41:39.788437 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:41:39.788456 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:41:39.788474 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:41:39.788493 | orchestrator |
2026-04-11 00:41:39.788505 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-11 00:41:39.788516 | orchestrator |
2026-04-11 00:41:39.788527 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-11 00:41:39.788537 | orchestrator | Saturday 11 April 2026 00:41:39 +0000 (0:00:04.610) 0:00:07.321 ********
2026-04-11 00:41:39.788548 | orchestrator | skipping: [testbed-manager]
2026-04-11 00:41:39.788559 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:41:39.788570 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:41:39.788580 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:41:39.788591 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:39.788601 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:41:39.788613 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:41:39.788623 | orchestrator |
2026-04-11 00:41:39.788634 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:41:39.788645 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:41:39.788657 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:41:39.788668 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:41:39.788679 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:41:39.788689 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:41:39.788700 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:41:39.788724 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:41:39.788735 | orchestrator |
2026-04-11 00:41:39.788746 | orchestrator |
2026-04-11 00:41:39.788757 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:41:39.788768 | orchestrator | Saturday 11 April 2026 00:41:39 +0000 (0:00:00.447) 0:00:07.768 ********
2026-04-11 00:41:39.788779 | orchestrator | ===============================================================================
2026-04-11 00:41:39.788789 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.61s
2026-04-11 00:41:39.788800 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.29s
2026-04-11 00:41:39.788810 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s
2026-04-11 00:41:39.788821 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s
2026-04-11 00:41:41.071324 | orchestrator | 2026-04-11 00:41:41 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-04-11 00:41:41.128094 | orchestrator | 2026-04-11 00:41:41 | INFO  | Task cd9c427a-0e02-45b7-810e-13e670a0f3da (ceph-configure-lvm-volumes) was prepared for execution.
2026-04-11 00:41:41.128161 | orchestrator | 2026-04-11 00:41:41 | INFO  | It takes a moment until task cd9c427a-0e02-45b7-810e-13e670a0f3da (ceph-configure-lvm-volumes) has been started and output is visible here.
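The wipe-partitions play above does two destructive passes per OSD disk: `wipefs` to drop filesystem/RAID signatures, then zeroing the start of the device, followed by a udev reload and trigger. A sketch of that sequence against a throwaway image file instead of a real `/dev/sdX` (the play zeroes the first 32M per device; a 4 MiB image is used here purely to keep the demo cheap, and the exact commands in the osism play may differ):

```shell
#!/usr/bin/env bash
set -e

img=$(mktemp)
dd if=/dev/urandom of="$img" bs=1M count=4 status=none   # fake "dirty" device

# 1) Drop filesystem/RAID signatures first, if wipefs is available here.
if command -v wipefs >/dev/null; then
    wipefs --all "$img" >/dev/null
fi

# 2) Overwrite the start of the device with zeros
#    (on a real disk: dd if=/dev/zero of=/dev/sdb bs=1M count=32).
dd if=/dev/zero of="$img" bs=1M count=4 conv=notrunc status=none

# 3) On a real host the play then refreshes the kernel's view of the device:
#      udevadm control --reload-rules && udevadm trigger

cmp -s "$img" <(head -c 4194304 /dev/zero) && echo "device region is zeroed"
rm -f "$img"
```

`conv=notrunc` matters when the target is a block device or an image whose size must not change; without it `dd` would truncate a regular file to the write length.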
2026-04-11 00:41:51.582683 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-11 00:41:51.582776 | orchestrator | 2.16.14
2026-04-11 00:41:51.582789 | orchestrator |
2026-04-11 00:41:51.582799 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-11 00:41:51.582809 | orchestrator |
2026-04-11 00:41:51.582817 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-11 00:41:51.582825 | orchestrator | Saturday 11 April 2026 00:41:45 +0000 (0:00:00.270) 0:00:00.270 ********
2026-04-11 00:41:51.582834 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 00:41:51.582872 | orchestrator |
2026-04-11 00:41:51.582881 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-11 00:41:51.582889 | orchestrator | Saturday 11 April 2026 00:41:45 +0000 (0:00:00.214) 0:00:00.484 ********
2026-04-11 00:41:51.582898 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:41:51.582907 | orchestrator |
2026-04-11 00:41:51.582915 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.582924 | orchestrator | Saturday 11 April 2026 00:41:45 +0000 (0:00:00.198) 0:00:00.683 ********
2026-04-11 00:41:51.582932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-11 00:41:51.582940 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-11 00:41:51.582949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-11 00:41:51.582957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-11 00:41:51.583020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-11 00:41:51.583036 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-11 00:41:51.583050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-11 00:41:51.583059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-11 00:41:51.583068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-11 00:41:51.583075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-11 00:41:51.583102 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-11 00:41:51.583111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-11 00:41:51.583119 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-11 00:41:51.583127 | orchestrator |
2026-04-11 00:41:51.583135 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.583143 | orchestrator | Saturday 11 April 2026 00:41:45 +0000 (0:00:00.310) 0:00:00.994 ********
2026-04-11 00:41:51.583151 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:51.583159 | orchestrator |
2026-04-11 00:41:51.583167 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.583174 | orchestrator | Saturday 11 April 2026 00:41:46 +0000 (0:00:00.366) 0:00:01.360 ********
2026-04-11 00:41:51.583182 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:51.583190 | orchestrator |
2026-04-11 00:41:51.583198 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.583205 | orchestrator | Saturday 11 April 2026 00:41:46 +0000 (0:00:00.172) 0:00:01.532 ********
2026-04-11 00:41:51.583218 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:51.583226 | orchestrator |
2026-04-11 00:41:51.583235 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.583244 | orchestrator | Saturday 11 April 2026 00:41:46 +0000 (0:00:00.174) 0:00:01.706 ********
2026-04-11 00:41:51.583253 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:51.583262 | orchestrator |
2026-04-11 00:41:51.583271 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.583281 | orchestrator | Saturday 11 April 2026 00:41:46 +0000 (0:00:00.170) 0:00:01.877 ********
2026-04-11 00:41:51.583290 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:51.583299 | orchestrator |
2026-04-11 00:41:51.583308 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.583317 | orchestrator | Saturday 11 April 2026 00:41:47 +0000 (0:00:00.172) 0:00:02.049 ********
2026-04-11 00:41:51.583326 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:51.583334 | orchestrator |
2026-04-11 00:41:51.583343 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.583352 | orchestrator | Saturday 11 April 2026 00:41:47 +0000 (0:00:00.163) 0:00:02.212 ********
2026-04-11 00:41:51.583361 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:51.583369 | orchestrator |
2026-04-11 00:41:51.583378 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.583387 | orchestrator | Saturday 11 April 2026 00:41:47 +0000 (0:00:00.192) 0:00:02.405 ********
2026-04-11 00:41:51.583396 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:51.583404 | orchestrator |
2026-04-11 00:41:51.583413 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.583426 | orchestrator | Saturday 11 April 2026 00:41:47 +0000 (0:00:00.164) 0:00:02.569 ********
2026-04-11 00:41:51.583436 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b)
2026-04-11 00:41:51.583446 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b)
2026-04-11 00:41:51.583455 | orchestrator |
2026-04-11 00:41:51.583464 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.583487 | orchestrator | Saturday 11 April 2026 00:41:47 +0000 (0:00:00.374) 0:00:02.944 ********
2026-04-11 00:41:51.583496 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dbdf8861-c941-4e45-ade6-c92a4c866684)
2026-04-11 00:41:51.583505 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dbdf8861-c941-4e45-ade6-c92a4c866684)
2026-04-11 00:41:51.583514 | orchestrator |
2026-04-11 00:41:51.583523 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.583532 | orchestrator | Saturday 11 April 2026 00:41:48 +0000 (0:00:00.383) 0:00:03.327 ********
2026-04-11 00:41:51.583549 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_baefa5b6-df7d-4524-a048-85b2fa508a08)
2026-04-11 00:41:51.583559 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_baefa5b6-df7d-4524-a048-85b2fa508a08)
2026-04-11 00:41:51.583573 | orchestrator |
2026-04-11 00:41:51.583585 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.583597 | orchestrator | Saturday 11 April 2026 00:41:48 +0000 (0:00:00.537) 0:00:03.865 ********
2026-04-11 00:41:51.583610 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_766868e9-fa25-49fa-8c1f-44c6e6ffb547)
2026-04-11 00:41:51.583623 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_766868e9-fa25-49fa-8c1f-44c6e6ffb547)
2026-04-11 00:41:51.583636 | orchestrator |
2026-04-11 00:41:51.583649 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:41:51.583662 | orchestrator | Saturday 11 April 2026 00:41:49 +0000 (0:00:00.546) 0:00:04.411 ********
2026-04-11 00:41:51.583676 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-11 00:41:51.583684 | orchestrator |
2026-04-11 00:41:51.583692 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:41:51.583700 | orchestrator | Saturday 11 April 2026 00:41:49 +0000 (0:00:00.589) 0:00:05.001 ********
2026-04-11 00:41:51.583708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-11 00:41:51.583715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-11 00:41:51.583724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-11 00:41:51.583731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-11 00:41:51.583739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-11 00:41:51.583747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-11 00:41:51.583754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-11 00:41:51.583762 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-11 00:41:51.583770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-11 00:41:51.583778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-11 00:41:51.583785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-11 00:41:51.583793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-11 00:41:51.583801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-11 00:41:51.583809 | orchestrator |
2026-04-11 00:41:51.583816 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:41:51.583824 | orchestrator | Saturday 11 April 2026 00:41:50 +0000 (0:00:00.340) 0:00:05.341 ********
2026-04-11 00:41:51.583832 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:51.583840 | orchestrator |
2026-04-11 00:41:51.583848 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:41:51.583855 | orchestrator | Saturday 11 April 2026 00:41:50 +0000 (0:00:00.211) 0:00:05.553 ********
2026-04-11 00:41:51.583863 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:51.583876 | orchestrator |
2026-04-11 00:41:51.583888 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:41:51.583901 | orchestrator | Saturday 11 April 2026 00:41:50 +0000 (0:00:00.171) 0:00:05.724 ********
2026-04-11 00:41:51.583914 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:51.583927 | orchestrator |
2026-04-11 00:41:51.583939 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:41:51.583960 | orchestrator | Saturday 11 April 2026 00:41:50 +0000 (0:00:00.199) 0:00:05.924 ********
2026-04-11 00:41:51.583994 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:41:51.584007 | orchestrator |
2026-04-11 00:41:51.584020 | orchestrator | TASK [Add known
partitions to the list of available block devices] ************* 2026-04-11 00:41:51.584033 | orchestrator | Saturday 11 April 2026 00:41:51 +0000 (0:00:00.175) 0:00:06.099 ******** 2026-04-11 00:41:51.584046 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:51.584059 | orchestrator | 2026-04-11 00:41:51.584073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:41:51.584082 | orchestrator | Saturday 11 April 2026 00:41:51 +0000 (0:00:00.175) 0:00:06.274 ******** 2026-04-11 00:41:51.584090 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:51.584098 | orchestrator | 2026-04-11 00:41:51.584111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:41:51.584119 | orchestrator | Saturday 11 April 2026 00:41:51 +0000 (0:00:00.163) 0:00:06.438 ******** 2026-04-11 00:41:51.584127 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:51.584135 | orchestrator | 2026-04-11 00:41:51.584150 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:41:58.639123 | orchestrator | Saturday 11 April 2026 00:41:51 +0000 (0:00:00.161) 0:00:06.600 ******** 2026-04-11 00:41:58.639233 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.639249 | orchestrator | 2026-04-11 00:41:58.639262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:41:58.639274 | orchestrator | Saturday 11 April 2026 00:41:51 +0000 (0:00:00.182) 0:00:06.782 ******** 2026-04-11 00:41:58.639286 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-11 00:41:58.639299 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-11 00:41:58.639311 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-11 00:41:58.639323 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-11 00:41:58.639334 | orchestrator | 2026-04-11 
00:41:58.639346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:41:58.639358 | orchestrator | Saturday 11 April 2026 00:41:52 +0000 (0:00:00.796) 0:00:07.579 ******** 2026-04-11 00:41:58.639370 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.639381 | orchestrator | 2026-04-11 00:41:58.639393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:41:58.639405 | orchestrator | Saturday 11 April 2026 00:41:52 +0000 (0:00:00.196) 0:00:07.776 ******** 2026-04-11 00:41:58.639417 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.639428 | orchestrator | 2026-04-11 00:41:58.639440 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:41:58.639452 | orchestrator | Saturday 11 April 2026 00:41:52 +0000 (0:00:00.200) 0:00:07.977 ******** 2026-04-11 00:41:58.639463 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.639475 | orchestrator | 2026-04-11 00:41:58.639487 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:41:58.639498 | orchestrator | Saturday 11 April 2026 00:41:53 +0000 (0:00:00.186) 0:00:08.163 ******** 2026-04-11 00:41:58.639510 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.639522 | orchestrator | 2026-04-11 00:41:58.639533 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-11 00:41:58.639545 | orchestrator | Saturday 11 April 2026 00:41:53 +0000 (0:00:00.180) 0:00:08.343 ******** 2026-04-11 00:41:58.639557 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-11 00:41:58.639569 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-11 00:41:58.639580 | orchestrator | 2026-04-11 00:41:58.639592 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-04-11 00:41:58.639604 | orchestrator | Saturday 11 April 2026 00:41:53 +0000 (0:00:00.153) 0:00:08.497 ******** 2026-04-11 00:41:58.639617 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.639652 | orchestrator | 2026-04-11 00:41:58.639666 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-11 00:41:58.639680 | orchestrator | Saturday 11 April 2026 00:41:53 +0000 (0:00:00.115) 0:00:08.612 ******** 2026-04-11 00:41:58.639693 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.639706 | orchestrator | 2026-04-11 00:41:58.639720 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-11 00:41:58.639733 | orchestrator | Saturday 11 April 2026 00:41:53 +0000 (0:00:00.103) 0:00:08.715 ******** 2026-04-11 00:41:58.639746 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.639758 | orchestrator | 2026-04-11 00:41:58.639772 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-11 00:41:58.639787 | orchestrator | Saturday 11 April 2026 00:41:53 +0000 (0:00:00.112) 0:00:08.828 ******** 2026-04-11 00:41:58.639800 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:41:58.639813 | orchestrator | 2026-04-11 00:41:58.639827 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-11 00:41:58.639840 | orchestrator | Saturday 11 April 2026 00:41:53 +0000 (0:00:00.112) 0:00:08.940 ******** 2026-04-11 00:41:58.639854 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e341cbc8-c5ec-51c3-80df-3957b9d9208d'}}) 2026-04-11 00:41:58.639868 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '11d3af0d-f4a7-5343-8eab-aab153531328'}}) 2026-04-11 00:41:58.639882 | orchestrator | 2026-04-11 00:41:58.639895 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-11 00:41:58.639908 | orchestrator | Saturday 11 April 2026 00:41:54 +0000 (0:00:00.153) 0:00:09.093 ******** 2026-04-11 00:41:58.639922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e341cbc8-c5ec-51c3-80df-3957b9d9208d'}})  2026-04-11 00:41:58.639948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '11d3af0d-f4a7-5343-8eab-aab153531328'}})  2026-04-11 00:41:58.640002 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.640015 | orchestrator | 2026-04-11 00:41:58.640026 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-11 00:41:58.640037 | orchestrator | Saturday 11 April 2026 00:41:54 +0000 (0:00:00.125) 0:00:09.218 ******** 2026-04-11 00:41:58.640047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e341cbc8-c5ec-51c3-80df-3957b9d9208d'}})  2026-04-11 00:41:58.640059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '11d3af0d-f4a7-5343-8eab-aab153531328'}})  2026-04-11 00:41:58.640070 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.640080 | orchestrator | 2026-04-11 00:41:58.640091 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-11 00:41:58.640102 | orchestrator | Saturday 11 April 2026 00:41:54 +0000 (0:00:00.128) 0:00:09.346 ******** 2026-04-11 00:41:58.640112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e341cbc8-c5ec-51c3-80df-3957b9d9208d'}})  2026-04-11 00:41:58.640140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '11d3af0d-f4a7-5343-8eab-aab153531328'}})  2026-04-11 00:41:58.640152 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.640163 | 
orchestrator | 2026-04-11 00:41:58.640174 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-11 00:41:58.640184 | orchestrator | Saturday 11 April 2026 00:41:54 +0000 (0:00:00.272) 0:00:09.618 ******** 2026-04-11 00:41:58.640195 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:41:58.640206 | orchestrator | 2026-04-11 00:41:58.640217 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-11 00:41:58.640227 | orchestrator | Saturday 11 April 2026 00:41:54 +0000 (0:00:00.130) 0:00:09.750 ******** 2026-04-11 00:41:58.640238 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:41:58.640249 | orchestrator | 2026-04-11 00:41:58.640268 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-11 00:41:58.640279 | orchestrator | Saturday 11 April 2026 00:41:54 +0000 (0:00:00.132) 0:00:09.882 ******** 2026-04-11 00:41:58.640289 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.640300 | orchestrator | 2026-04-11 00:41:58.640311 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-11 00:41:58.640322 | orchestrator | Saturday 11 April 2026 00:41:54 +0000 (0:00:00.129) 0:00:10.012 ******** 2026-04-11 00:41:58.640333 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.640344 | orchestrator | 2026-04-11 00:41:58.640355 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-11 00:41:58.640365 | orchestrator | Saturday 11 April 2026 00:41:55 +0000 (0:00:00.123) 0:00:10.135 ******** 2026-04-11 00:41:58.640385 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.640396 | orchestrator | 2026-04-11 00:41:58.640407 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-11 00:41:58.640418 | orchestrator | Saturday 11 April 2026 00:41:55 +0000 
(0:00:00.136) 0:00:10.272 ******** 2026-04-11 00:41:58.640429 | orchestrator | ok: [testbed-node-3] => { 2026-04-11 00:41:58.640440 | orchestrator |  "ceph_osd_devices": { 2026-04-11 00:41:58.640452 | orchestrator |  "sdb": { 2026-04-11 00:41:58.640463 | orchestrator |  "osd_lvm_uuid": "e341cbc8-c5ec-51c3-80df-3957b9d9208d" 2026-04-11 00:41:58.640475 | orchestrator |  }, 2026-04-11 00:41:58.640486 | orchestrator |  "sdc": { 2026-04-11 00:41:58.640497 | orchestrator |  "osd_lvm_uuid": "11d3af0d-f4a7-5343-8eab-aab153531328" 2026-04-11 00:41:58.640508 | orchestrator |  } 2026-04-11 00:41:58.640518 | orchestrator |  } 2026-04-11 00:41:58.640529 | orchestrator | } 2026-04-11 00:41:58.640540 | orchestrator | 2026-04-11 00:41:58.640551 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-11 00:41:58.640562 | orchestrator | Saturday 11 April 2026 00:41:55 +0000 (0:00:00.140) 0:00:10.412 ******** 2026-04-11 00:41:58.640572 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.640583 | orchestrator | 2026-04-11 00:41:58.640594 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-11 00:41:58.640604 | orchestrator | Saturday 11 April 2026 00:41:55 +0000 (0:00:00.133) 0:00:10.545 ******** 2026-04-11 00:41:58.640615 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.640625 | orchestrator | 2026-04-11 00:41:58.640636 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-11 00:41:58.640647 | orchestrator | Saturday 11 April 2026 00:41:55 +0000 (0:00:00.136) 0:00:10.681 ******** 2026-04-11 00:41:58.640658 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:41:58.640668 | orchestrator | 2026-04-11 00:41:58.640679 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-11 00:41:58.640690 | orchestrator | Saturday 11 April 2026 00:41:55 +0000 
(0:00:00.155) 0:00:10.837 ******** 2026-04-11 00:41:58.640700 | orchestrator | changed: [testbed-node-3] => { 2026-04-11 00:41:58.640712 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-11 00:41:58.640722 | orchestrator |  "ceph_osd_devices": { 2026-04-11 00:41:58.640733 | orchestrator |  "sdb": { 2026-04-11 00:41:58.640744 | orchestrator |  "osd_lvm_uuid": "e341cbc8-c5ec-51c3-80df-3957b9d9208d" 2026-04-11 00:41:58.640755 | orchestrator |  }, 2026-04-11 00:41:58.640766 | orchestrator |  "sdc": { 2026-04-11 00:41:58.640777 | orchestrator |  "osd_lvm_uuid": "11d3af0d-f4a7-5343-8eab-aab153531328" 2026-04-11 00:41:58.640788 | orchestrator |  } 2026-04-11 00:41:58.640799 | orchestrator |  }, 2026-04-11 00:41:58.640810 | orchestrator |  "lvm_volumes": [ 2026-04-11 00:41:58.640821 | orchestrator |  { 2026-04-11 00:41:58.640832 | orchestrator |  "data": "osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d", 2026-04-11 00:41:58.640843 | orchestrator |  "data_vg": "ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d" 2026-04-11 00:41:58.640861 | orchestrator |  }, 2026-04-11 00:41:58.640871 | orchestrator |  { 2026-04-11 00:41:58.640882 | orchestrator |  "data": "osd-block-11d3af0d-f4a7-5343-8eab-aab153531328", 2026-04-11 00:41:58.640893 | orchestrator |  "data_vg": "ceph-11d3af0d-f4a7-5343-8eab-aab153531328" 2026-04-11 00:41:58.640904 | orchestrator |  } 2026-04-11 00:41:58.640915 | orchestrator |  ] 2026-04-11 00:41:58.640925 | orchestrator |  } 2026-04-11 00:41:58.640936 | orchestrator | } 2026-04-11 00:41:58.640947 | orchestrator | 2026-04-11 00:41:58.640979 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-11 00:41:58.640990 | orchestrator | Saturday 11 April 2026 00:41:56 +0000 (0:00:00.209) 0:00:11.046 ******** 2026-04-11 00:41:58.641001 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-11 00:41:58.641012 | orchestrator | 2026-04-11 00:41:58.641023 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-11 00:41:58.641033 | orchestrator | 2026-04-11 00:41:58.641044 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-11 00:41:58.641055 | orchestrator | Saturday 11 April 2026 00:41:58 +0000 (0:00:02.128) 0:00:13.175 ******** 2026-04-11 00:41:58.641065 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-11 00:41:58.641076 | orchestrator | 2026-04-11 00:41:58.641087 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-11 00:41:58.641097 | orchestrator | Saturday 11 April 2026 00:41:58 +0000 (0:00:00.241) 0:00:13.416 ******** 2026-04-11 00:41:58.641108 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:41:58.641119 | orchestrator | 2026-04-11 00:41:58.641136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:42:05.657324 | orchestrator | Saturday 11 April 2026 00:41:58 +0000 (0:00:00.241) 0:00:13.658 ******** 2026-04-11 00:42:05.657458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-11 00:42:05.657484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-11 00:42:05.657502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-11 00:42:05.657519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-11 00:42:05.657536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-11 00:42:05.657553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-11 00:42:05.657569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-11 00:42:05.657587 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-11 00:42:05.657609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-11 00:42:05.657626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-11 00:42:05.657644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-11 00:42:05.657660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-11 00:42:05.657679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-11 00:42:05.657696 | orchestrator | 2026-04-11 00:42:05.657713 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:42:05.657727 | orchestrator | Saturday 11 April 2026 00:41:59 +0000 (0:00:00.388) 0:00:14.046 ******** 2026-04-11 00:42:05.657737 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.657748 | orchestrator | 2026-04-11 00:42:05.657758 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:42:05.657768 | orchestrator | Saturday 11 April 2026 00:41:59 +0000 (0:00:00.171) 0:00:14.217 ******** 2026-04-11 00:42:05.657778 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.657810 | orchestrator | 2026-04-11 00:42:05.657821 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:42:05.657831 | orchestrator | Saturday 11 April 2026 00:41:59 +0000 (0:00:00.181) 0:00:14.399 ******** 2026-04-11 00:42:05.657840 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.657850 | orchestrator | 2026-04-11 00:42:05.657859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:42:05.657871 | 
orchestrator | Saturday 11 April 2026 00:41:59 +0000 (0:00:00.192) 0:00:14.592 ******** 2026-04-11 00:42:05.657882 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.657894 | orchestrator | 2026-04-11 00:42:05.657905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:42:05.657917 | orchestrator | Saturday 11 April 2026 00:41:59 +0000 (0:00:00.197) 0:00:14.789 ******** 2026-04-11 00:42:05.657928 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.657939 | orchestrator | 2026-04-11 00:42:05.658006 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:42:05.658118 | orchestrator | Saturday 11 April 2026 00:41:59 +0000 (0:00:00.181) 0:00:14.971 ******** 2026-04-11 00:42:05.658140 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.658158 | orchestrator | 2026-04-11 00:42:05.658176 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:42:05.658194 | orchestrator | Saturday 11 April 2026 00:42:00 +0000 (0:00:00.566) 0:00:15.537 ******** 2026-04-11 00:42:05.658211 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.658230 | orchestrator | 2026-04-11 00:42:05.658247 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:42:05.658265 | orchestrator | Saturday 11 April 2026 00:42:00 +0000 (0:00:00.186) 0:00:15.724 ******** 2026-04-11 00:42:05.658283 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.658300 | orchestrator | 2026-04-11 00:42:05.658319 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:42:05.658336 | orchestrator | Saturday 11 April 2026 00:42:00 +0000 (0:00:00.193) 0:00:15.917 ******** 2026-04-11 00:42:05.658353 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22) 2026-04-11 00:42:05.658370 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22) 2026-04-11 00:42:05.658387 | orchestrator | 2026-04-11 00:42:05.658403 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:42:05.658420 | orchestrator | Saturday 11 April 2026 00:42:01 +0000 (0:00:00.421) 0:00:16.339 ******** 2026-04-11 00:42:05.658437 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_94d6c339-d22c-4552-bea0-bfaa215148a5) 2026-04-11 00:42:05.658453 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_94d6c339-d22c-4552-bea0-bfaa215148a5) 2026-04-11 00:42:05.658469 | orchestrator | 2026-04-11 00:42:05.658485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:42:05.658511 | orchestrator | Saturday 11 April 2026 00:42:01 +0000 (0:00:00.396) 0:00:16.735 ******** 2026-04-11 00:42:05.658528 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5beaa40e-2504-4a54-8b0e-fa95ef0fce86) 2026-04-11 00:42:05.658543 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5beaa40e-2504-4a54-8b0e-fa95ef0fce86) 2026-04-11 00:42:05.658559 | orchestrator | 2026-04-11 00:42:05.658574 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:42:05.658613 | orchestrator | Saturday 11 April 2026 00:42:02 +0000 (0:00:00.420) 0:00:17.156 ******** 2026-04-11 00:42:05.658629 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_73d9f47d-c224-4fc5-85fe-6159d35bcb3b) 2026-04-11 00:42:05.658644 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_73d9f47d-c224-4fc5-85fe-6159d35bcb3b) 2026-04-11 00:42:05.658660 | orchestrator | 2026-04-11 00:42:05.658677 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-11 00:42:05.658709 | orchestrator | Saturday 11 April 2026 00:42:02 +0000 (0:00:00.410) 0:00:17.566 ******** 2026-04-11 00:42:05.658725 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-11 00:42:05.658742 | orchestrator | 2026-04-11 00:42:05.658758 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:42:05.658774 | orchestrator | Saturday 11 April 2026 00:42:02 +0000 (0:00:00.316) 0:00:17.882 ******** 2026-04-11 00:42:05.658789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-11 00:42:05.658806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-11 00:42:05.658823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-11 00:42:05.658840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-11 00:42:05.658857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-11 00:42:05.658873 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-11 00:42:05.658888 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-11 00:42:05.658898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-11 00:42:05.658908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-11 00:42:05.658917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-11 00:42:05.658927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-04-11 00:42:05.658936 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-11 00:42:05.658945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-11 00:42:05.658984 | orchestrator | 2026-04-11 00:42:05.658994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:42:05.659004 | orchestrator | Saturday 11 April 2026 00:42:03 +0000 (0:00:00.360) 0:00:18.242 ******** 2026-04-11 00:42:05.659014 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.659023 | orchestrator | 2026-04-11 00:42:05.659033 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:42:05.659043 | orchestrator | Saturday 11 April 2026 00:42:03 +0000 (0:00:00.192) 0:00:18.435 ******** 2026-04-11 00:42:05.659052 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.659062 | orchestrator | 2026-04-11 00:42:05.659072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:42:05.659081 | orchestrator | Saturday 11 April 2026 00:42:03 +0000 (0:00:00.461) 0:00:18.897 ******** 2026-04-11 00:42:05.659091 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.659101 | orchestrator | 2026-04-11 00:42:05.659110 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:42:05.659120 | orchestrator | Saturday 11 April 2026 00:42:04 +0000 (0:00:00.168) 0:00:19.066 ******** 2026-04-11 00:42:05.659130 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.659139 | orchestrator | 2026-04-11 00:42:05.659149 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:42:05.659158 | orchestrator | Saturday 11 April 2026 00:42:04 +0000 (0:00:00.180) 0:00:19.246 ******** 2026-04-11 00:42:05.659168 
| orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.659177 | orchestrator | 2026-04-11 00:42:05.659187 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:42:05.659197 | orchestrator | Saturday 11 April 2026 00:42:04 +0000 (0:00:00.178) 0:00:19.425 ******** 2026-04-11 00:42:05.659206 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.659216 | orchestrator | 2026-04-11 00:42:05.659225 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:42:05.659243 | orchestrator | Saturday 11 April 2026 00:42:04 +0000 (0:00:00.177) 0:00:19.603 ******** 2026-04-11 00:42:05.659252 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.659262 | orchestrator | 2026-04-11 00:42:05.659272 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:42:05.659281 | orchestrator | Saturday 11 April 2026 00:42:04 +0000 (0:00:00.170) 0:00:19.774 ******** 2026-04-11 00:42:05.659291 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:42:05.659300 | orchestrator | 2026-04-11 00:42:05.659310 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:42:05.659320 | orchestrator | Saturday 11 April 2026 00:42:04 +0000 (0:00:00.201) 0:00:19.976 ******** 2026-04-11 00:42:05.659336 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-11 00:42:05.659347 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-11 00:42:05.659357 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-11 00:42:05.659366 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-11 00:42:05.659376 | orchestrator | 2026-04-11 00:42:05.659386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:42:05.659396 | orchestrator | Saturday 11 April 2026 00:42:05 +0000 (0:00:00.586) 
0:00:20.562 ********
2026-04-11 00:42:05.659405 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.163742 | orchestrator |
2026-04-11 00:42:11.163857 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:11.163871 | orchestrator | Saturday 11 April 2026 00:42:05 +0000 (0:00:00.190) 0:00:20.752 ********
2026-04-11 00:42:11.163880 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.163888 | orchestrator |
2026-04-11 00:42:11.163896 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:11.163903 | orchestrator | Saturday 11 April 2026 00:42:05 +0000 (0:00:00.164) 0:00:20.917 ********
2026-04-11 00:42:11.163911 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.163918 | orchestrator |
2026-04-11 00:42:11.163926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:11.163933 | orchestrator | Saturday 11 April 2026 00:42:06 +0000 (0:00:00.179) 0:00:21.097 ********
2026-04-11 00:42:11.163940 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.163990 | orchestrator |
2026-04-11 00:42:11.163999 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-11 00:42:11.164006 | orchestrator | Saturday 11 April 2026 00:42:06 +0000 (0:00:00.174) 0:00:21.271 ********
2026-04-11 00:42:11.164014 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-04-11 00:42:11.164021 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-04-11 00:42:11.164028 | orchestrator |
2026-04-11 00:42:11.164036 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-11 00:42:11.164043 | orchestrator | Saturday 11 April 2026 00:42:06 +0000 (0:00:00.260) 0:00:21.532 ********
2026-04-11 00:42:11.164051 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.164058 | orchestrator |
2026-04-11 00:42:11.164065 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-11 00:42:11.164072 | orchestrator | Saturday 11 April 2026 00:42:06 +0000 (0:00:00.119) 0:00:21.651 ********
2026-04-11 00:42:11.164080 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.164087 | orchestrator |
2026-04-11 00:42:11.164094 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-11 00:42:11.164101 | orchestrator | Saturday 11 April 2026 00:42:06 +0000 (0:00:00.116) 0:00:21.768 ********
2026-04-11 00:42:11.164109 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.164116 | orchestrator |
2026-04-11 00:42:11.164123 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-11 00:42:11.164130 | orchestrator | Saturday 11 April 2026 00:42:06 +0000 (0:00:00.105) 0:00:21.873 ********
2026-04-11 00:42:11.164138 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:42:11.164167 | orchestrator |
2026-04-11 00:42:11.164175 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-11 00:42:11.164182 | orchestrator | Saturday 11 April 2026 00:42:06 +0000 (0:00:00.128) 0:00:22.002 ********
2026-04-11 00:42:11.164190 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '55dbb7e6-b9f3-5799-9073-e4231f2b06a1'}})
2026-04-11 00:42:11.164197 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '296a8e73-f310-5078-85ef-ed795f213448'}})
2026-04-11 00:42:11.164204 | orchestrator |
2026-04-11 00:42:11.164212 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-11 00:42:11.164219 | orchestrator | Saturday 11 April 2026 00:42:07 +0000 (0:00:00.134) 0:00:22.137 ********
2026-04-11 00:42:11.164226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '55dbb7e6-b9f3-5799-9073-e4231f2b06a1'}})
2026-04-11 00:42:11.164234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '296a8e73-f310-5078-85ef-ed795f213448'}})
2026-04-11 00:42:11.164241 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.164249 | orchestrator |
2026-04-11 00:42:11.164256 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-11 00:42:11.164265 | orchestrator | Saturday 11 April 2026 00:42:07 +0000 (0:00:00.132) 0:00:22.269 ********
2026-04-11 00:42:11.164273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '55dbb7e6-b9f3-5799-9073-e4231f2b06a1'}})
2026-04-11 00:42:11.164282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '296a8e73-f310-5078-85ef-ed795f213448'}})
2026-04-11 00:42:11.164290 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.164299 | orchestrator |
2026-04-11 00:42:11.164307 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-11 00:42:11.164315 | orchestrator | Saturday 11 April 2026 00:42:07 +0000 (0:00:00.137) 0:00:22.407 ********
2026-04-11 00:42:11.164323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '55dbb7e6-b9f3-5799-9073-e4231f2b06a1'}})
2026-04-11 00:42:11.164331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '296a8e73-f310-5078-85ef-ed795f213448'}})
2026-04-11 00:42:11.164340 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.164348 | orchestrator |
2026-04-11 00:42:11.164369 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-11 00:42:11.164377 | orchestrator | Saturday 11 April 2026 00:42:07 +0000 (0:00:00.126) 0:00:22.533 ********
2026-04-11 00:42:11.164385 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:42:11.164394 | orchestrator |
2026-04-11 00:42:11.164402 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-11 00:42:11.164410 | orchestrator | Saturday 11 April 2026 00:42:07 +0000 (0:00:00.114) 0:00:22.647 ********
2026-04-11 00:42:11.164418 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:42:11.164426 | orchestrator |
2026-04-11 00:42:11.164433 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-11 00:42:11.164442 | orchestrator | Saturday 11 April 2026 00:42:07 +0000 (0:00:00.132) 0:00:22.780 ********
2026-04-11 00:42:11.164465 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.164473 | orchestrator |
2026-04-11 00:42:11.164495 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-11 00:42:11.164503 | orchestrator | Saturday 11 April 2026 00:42:07 +0000 (0:00:00.119) 0:00:22.900 ********
2026-04-11 00:42:11.164510 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.164518 | orchestrator |
2026-04-11 00:42:11.164525 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-11 00:42:11.164532 | orchestrator | Saturday 11 April 2026 00:42:08 +0000 (0:00:00.262) 0:00:23.162 ********
2026-04-11 00:42:11.164539 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.164546 | orchestrator |
2026-04-11 00:42:11.164559 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-11 00:42:11.164567 | orchestrator | Saturday 11 April 2026 00:42:08 +0000 (0:00:00.119) 0:00:23.282 ********
2026-04-11 00:42:11.164574 | orchestrator | ok: [testbed-node-4] => {
2026-04-11 00:42:11.164581 | orchestrator |     "ceph_osd_devices": {
2026-04-11 00:42:11.164589 | orchestrator |         "sdb": {
2026-04-11 00:42:11.164596 | orchestrator |             "osd_lvm_uuid": "55dbb7e6-b9f3-5799-9073-e4231f2b06a1"
2026-04-11 00:42:11.164604 | orchestrator |         },
2026-04-11 00:42:11.164612 | orchestrator |         "sdc": {
2026-04-11 00:42:11.164619 | orchestrator |             "osd_lvm_uuid": "296a8e73-f310-5078-85ef-ed795f213448"
2026-04-11 00:42:11.164626 | orchestrator |         }
2026-04-11 00:42:11.164633 | orchestrator |     }
2026-04-11 00:42:11.164641 | orchestrator | }
2026-04-11 00:42:11.164648 | orchestrator |
2026-04-11 00:42:11.164656 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-11 00:42:11.164663 | orchestrator | Saturday 11 April 2026 00:42:08 +0000 (0:00:00.153) 0:00:23.435 ********
2026-04-11 00:42:11.164670 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.164677 | orchestrator |
2026-04-11 00:42:11.164684 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-11 00:42:11.164691 | orchestrator | Saturday 11 April 2026 00:42:08 +0000 (0:00:00.124) 0:00:23.560 ********
2026-04-11 00:42:11.164698 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.164705 | orchestrator |
2026-04-11 00:42:11.164713 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-11 00:42:11.164720 | orchestrator | Saturday 11 April 2026 00:42:08 +0000 (0:00:00.122) 0:00:23.682 ********
2026-04-11 00:42:11.164727 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:42:11.164734 | orchestrator |
2026-04-11 00:42:11.164741 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-11 00:42:11.164748 | orchestrator | Saturday 11 April 2026 00:42:08 +0000 (0:00:00.114) 0:00:23.796 ********
2026-04-11 00:42:11.164755 | orchestrator | changed: [testbed-node-4] => {
2026-04-11 00:42:11.164762 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-11 00:42:11.164770 | orchestrator |         "ceph_osd_devices": {
2026-04-11 00:42:11.164777 | orchestrator |             "sdb": {
2026-04-11 00:42:11.164784 | orchestrator |                 "osd_lvm_uuid": "55dbb7e6-b9f3-5799-9073-e4231f2b06a1"
2026-04-11 00:42:11.164792 | orchestrator |             },
2026-04-11 00:42:11.164799 | orchestrator |             "sdc": {
2026-04-11 00:42:11.164806 | orchestrator |                 "osd_lvm_uuid": "296a8e73-f310-5078-85ef-ed795f213448"
2026-04-11 00:42:11.164813 | orchestrator |             }
2026-04-11 00:42:11.164820 | orchestrator |         },
2026-04-11 00:42:11.164827 | orchestrator |         "lvm_volumes": [
2026-04-11 00:42:11.164835 | orchestrator |             {
2026-04-11 00:42:11.164842 | orchestrator |                 "data": "osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1",
2026-04-11 00:42:11.164849 | orchestrator |                 "data_vg": "ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1"
2026-04-11 00:42:11.164857 | orchestrator |             },
2026-04-11 00:42:11.164864 | orchestrator |             {
2026-04-11 00:42:11.164871 | orchestrator |                 "data": "osd-block-296a8e73-f310-5078-85ef-ed795f213448",
2026-04-11 00:42:11.164878 | orchestrator |                 "data_vg": "ceph-296a8e73-f310-5078-85ef-ed795f213448"
2026-04-11 00:42:11.164885 | orchestrator |             }
2026-04-11 00:42:11.164892 | orchestrator |         ]
2026-04-11 00:42:11.164900 | orchestrator |     }
2026-04-11 00:42:11.164907 | orchestrator | }
2026-04-11 00:42:11.164914 | orchestrator |
2026-04-11 00:42:11.164921 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-11 00:42:11.164929 | orchestrator | Saturday 11 April 2026 00:42:08 +0000 (0:00:00.194) 0:00:23.990 ********
2026-04-11 00:42:11.164936 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-11 00:42:11.164943 | orchestrator |
2026-04-11 00:42:11.164967 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-11 00:42:11.164980 | orchestrator |
2026-04-11 00:42:11.164988 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-11 00:42:11.164995 | orchestrator | Saturday 11 April 2026 00:42:10 +0000 (0:00:01.036) 0:00:25.027 ********
2026-04-11 00:42:11.165002 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-11 00:42:11.165009 | orchestrator |
2026-04-11 00:42:11.165016 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-11 00:42:11.165023 | orchestrator | Saturday 11 April 2026 00:42:10 +0000 (0:00:00.518) 0:00:25.397 ********
2026-04-11 00:42:11.165030 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:42:11.165037 | orchestrator |
2026-04-11 00:42:11.165045 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:11.165052 | orchestrator | Saturday 11 April 2026 00:42:10 +0000 (0:00:00.518) 0:00:25.915 ********
2026-04-11 00:42:11.165059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-11 00:42:11.165066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-11 00:42:11.165073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-11 00:42:11.165080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-11 00:42:11.165087 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-11 00:42:11.165099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-11 00:42:18.669606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-11 00:42:18.669712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-11 00:42:18.669724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-11 00:42:18.669734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-11 00:42:18.669743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-11 00:42:18.669752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-11 00:42:18.669761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-11 00:42:18.669770 | orchestrator |
2026-04-11 00:42:18.669780 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:18.669790 | orchestrator | Saturday 11 April 2026 00:42:11 +0000 (0:00:00.345) 0:00:26.261 ********
2026-04-11 00:42:18.669799 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.669809 | orchestrator |
2026-04-11 00:42:18.669818 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:18.669826 | orchestrator | Saturday 11 April 2026 00:42:11 +0000 (0:00:00.217) 0:00:26.479 ********
2026-04-11 00:42:18.669835 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.669844 | orchestrator |
2026-04-11 00:42:18.669869 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:18.669879 | orchestrator | Saturday 11 April 2026 00:42:11 +0000 (0:00:00.185) 0:00:26.664 ********
2026-04-11 00:42:18.669888 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.669896 | orchestrator |
2026-04-11 00:42:18.669905 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:18.669914 | orchestrator | Saturday 11 April 2026 00:42:11 +0000 (0:00:00.207) 0:00:26.872 ********
2026-04-11 00:42:18.669923 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.669931 | orchestrator |
2026-04-11 00:42:18.669977 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:18.669992 | orchestrator | Saturday 11 April 2026 00:42:12 +0000 (0:00:00.180) 0:00:27.052 ********
2026-04-11 00:42:18.670001 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.670080 | orchestrator |
2026-04-11 00:42:18.670091 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:18.670100 | orchestrator | Saturday 11 April 2026 00:42:12 +0000 (0:00:00.169) 0:00:27.222 ********
2026-04-11 00:42:18.670109 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.670117 | orchestrator |
2026-04-11 00:42:18.670126 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:18.670135 | orchestrator | Saturday 11 April 2026 00:42:12 +0000 (0:00:00.180) 0:00:27.402 ********
2026-04-11 00:42:18.670143 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.670152 | orchestrator |
2026-04-11 00:42:18.670160 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:18.670170 | orchestrator | Saturday 11 April 2026 00:42:12 +0000 (0:00:00.178) 0:00:27.580 ********
2026-04-11 00:42:18.670179 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.670187 | orchestrator |
2026-04-11 00:42:18.670196 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:18.670204 | orchestrator | Saturday 11 April 2026 00:42:12 +0000 (0:00:00.178) 0:00:27.758 ********
2026-04-11 00:42:18.670213 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c)
2026-04-11 00:42:18.670223 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c)
2026-04-11 00:42:18.670232 | orchestrator |
2026-04-11 00:42:18.670240 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:18.670249 | orchestrator | Saturday 11 April 2026 00:42:13 +0000 (0:00:00.518) 0:00:28.276 ********
2026-04-11 00:42:18.670258 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_95d65739-9dbf-4d48-9a83-eea9c175e7b9)
2026-04-11 00:42:18.670266 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_95d65739-9dbf-4d48-9a83-eea9c175e7b9)
2026-04-11 00:42:18.670275 | orchestrator |
2026-04-11 00:42:18.670284 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:18.670292 | orchestrator | Saturday 11 April 2026 00:42:13 +0000 (0:00:00.629) 0:00:28.906 ********
2026-04-11 00:42:18.670300 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f9459d28-2556-4a8a-b254-3aab23978995)
2026-04-11 00:42:18.670309 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f9459d28-2556-4a8a-b254-3aab23978995)
2026-04-11 00:42:18.670318 | orchestrator |
2026-04-11 00:42:18.670326 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:18.670335 | orchestrator | Saturday 11 April 2026 00:42:14 +0000 (0:00:00.390) 0:00:29.296 ********
2026-04-11 00:42:18.670343 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a6786cac-aebf-4828-8780-9d987947f514)
2026-04-11 00:42:18.670352 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a6786cac-aebf-4828-8780-9d987947f514)
2026-04-11 00:42:18.670361 | orchestrator |
2026-04-11 00:42:18.670369 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:42:18.670378 | orchestrator | Saturday 11 April 2026 00:42:14 +0000 (0:00:00.398) 0:00:29.695 ********
2026-04-11 00:42:18.670387 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-11 00:42:18.670395 | orchestrator |
2026-04-11 00:42:18.670404 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.670429 | orchestrator | Saturday 11 April 2026 00:42:14 +0000 (0:00:00.306) 0:00:30.002 ********
2026-04-11 00:42:18.670438 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-11 00:42:18.670447 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-11 00:42:18.670456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-11 00:42:18.670465 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-11 00:42:18.670481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-11 00:42:18.670489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-11 00:42:18.670498 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-11 00:42:18.670506 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-11 00:42:18.670515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-11 00:42:18.670524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-11 00:42:18.670532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-11 00:42:18.670541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-11 00:42:18.670549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-11 00:42:18.670558 | orchestrator |
2026-04-11 00:42:18.670567 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.670575 | orchestrator | Saturday 11 April 2026 00:42:15 +0000 (0:00:00.343) 0:00:30.346 ********
2026-04-11 00:42:18.670584 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.670592 | orchestrator |
2026-04-11 00:42:18.670601 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.670610 | orchestrator | Saturday 11 April 2026 00:42:15 +0000 (0:00:00.179) 0:00:30.525 ********
2026-04-11 00:42:18.670618 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.670627 | orchestrator |
2026-04-11 00:42:18.670635 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.670644 | orchestrator | Saturday 11 April 2026 00:42:15 +0000 (0:00:00.176) 0:00:30.702 ********
2026-04-11 00:42:18.670653 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.670661 | orchestrator |
2026-04-11 00:42:18.670670 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.670679 | orchestrator | Saturday 11 April 2026 00:42:15 +0000 (0:00:00.180) 0:00:30.882 ********
2026-04-11 00:42:18.670687 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.670696 | orchestrator |
2026-04-11 00:42:18.670705 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.670713 | orchestrator | Saturday 11 April 2026 00:42:16 +0000 (0:00:00.182) 0:00:31.065 ********
2026-04-11 00:42:18.670722 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.670730 | orchestrator |
2026-04-11 00:42:18.670739 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.670748 | orchestrator | Saturday 11 April 2026 00:42:16 +0000 (0:00:00.193) 0:00:31.258 ********
2026-04-11 00:42:18.670757 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.670773 | orchestrator |
2026-04-11 00:42:18.670789 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.670809 | orchestrator | Saturday 11 April 2026 00:42:16 +0000 (0:00:00.651) 0:00:31.909 ********
2026-04-11 00:42:18.670827 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.670842 | orchestrator |
2026-04-11 00:42:18.670865 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.670881 | orchestrator | Saturday 11 April 2026 00:42:17 +0000 (0:00:00.240) 0:00:32.150 ********
2026-04-11 00:42:18.670896 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.670911 | orchestrator |
2026-04-11 00:42:18.670926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.670982 | orchestrator | Saturday 11 April 2026 00:42:17 +0000 (0:00:00.185) 0:00:32.336 ********
2026-04-11 00:42:18.670998 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-11 00:42:18.671012 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-11 00:42:18.671036 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-11 00:42:18.671049 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-11 00:42:18.671065 | orchestrator |
2026-04-11 00:42:18.671079 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.671093 | orchestrator | Saturday 11 April 2026 00:42:17 +0000 (0:00:00.632) 0:00:32.968 ********
2026-04-11 00:42:18.671102 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.671110 | orchestrator |
2026-04-11 00:42:18.671119 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.671128 | orchestrator | Saturday 11 April 2026 00:42:18 +0000 (0:00:00.182) 0:00:33.151 ********
2026-04-11 00:42:18.671136 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.671145 | orchestrator |
2026-04-11 00:42:18.671153 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.671162 | orchestrator | Saturday 11 April 2026 00:42:18 +0000 (0:00:00.184) 0:00:33.335 ********
2026-04-11 00:42:18.671170 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.671179 | orchestrator |
2026-04-11 00:42:18.671187 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:42:18.671196 | orchestrator | Saturday 11 April 2026 00:42:18 +0000 (0:00:00.174) 0:00:33.510 ********
2026-04-11 00:42:18.671205 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:18.671213 | orchestrator |
2026-04-11 00:42:18.671230 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-11 00:42:22.214439 | orchestrator | Saturday 11 April 2026 00:42:18 +0000 (0:00:00.179) 0:00:33.689 ********
2026-04-11 00:42:22.214531 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-04-11 00:42:22.214541 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-04-11 00:42:22.214548 | orchestrator |
2026-04-11 00:42:22.214555 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-11 00:42:22.214562 | orchestrator | Saturday 11 April 2026 00:42:18 +0000 (0:00:00.137) 0:00:33.826 ********
2026-04-11 00:42:22.214568 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:22.214575 | orchestrator |
2026-04-11 00:42:22.214581 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-11 00:42:22.214587 | orchestrator | Saturday 11 April 2026 00:42:18 +0000 (0:00:00.113) 0:00:33.940 ********
2026-04-11 00:42:22.214593 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:22.214599 | orchestrator |
2026-04-11 00:42:22.214606 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-11 00:42:22.214612 | orchestrator | Saturday 11 April 2026 00:42:19 +0000 (0:00:00.116) 0:00:34.056 ********
2026-04-11 00:42:22.214618 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:22.214624 | orchestrator |
2026-04-11 00:42:22.214630 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-11 00:42:22.214637 | orchestrator | Saturday 11 April 2026 00:42:19 +0000 (0:00:00.126) 0:00:34.182 ********
2026-04-11 00:42:22.214643 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:42:22.214650 | orchestrator |
2026-04-11 00:42:22.214656 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-11 00:42:22.214661 | orchestrator | Saturday 11 April 2026 00:42:19 +0000 (0:00:00.249) 0:00:34.432 ********
2026-04-11 00:42:22.214668 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b3fdd62a-f314-5cc9-917d-dd1b13b97b05'}})
2026-04-11 00:42:22.214675 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d143ac2-4627-58f1-a5c6-72fac7fe3e62'}})
2026-04-11 00:42:22.214680 | orchestrator |
2026-04-11 00:42:22.214686 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-11 00:42:22.214692 | orchestrator | Saturday 11 April 2026 00:42:19 +0000 (0:00:00.128) 0:00:34.561 ********
2026-04-11 00:42:22.214699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b3fdd62a-f314-5cc9-917d-dd1b13b97b05'}})
2026-04-11 00:42:22.214731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d143ac2-4627-58f1-a5c6-72fac7fe3e62'}})
2026-04-11 00:42:22.214737 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:22.214743 | orchestrator |
2026-04-11 00:42:22.214749 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-11 00:42:22.214754 | orchestrator | Saturday 11 April 2026 00:42:19 +0000 (0:00:00.146) 0:00:34.707 ********
2026-04-11 00:42:22.214760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b3fdd62a-f314-5cc9-917d-dd1b13b97b05'}})
2026-04-11 00:42:22.214766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d143ac2-4627-58f1-a5c6-72fac7fe3e62'}})
2026-04-11 00:42:22.214772 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:22.214778 | orchestrator |
2026-04-11 00:42:22.214784 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-11 00:42:22.214790 | orchestrator | Saturday 11 April 2026 00:42:19 +0000 (0:00:00.140) 0:00:34.848 ********
2026-04-11 00:42:22.214796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b3fdd62a-f314-5cc9-917d-dd1b13b97b05'}})
2026-04-11 00:42:22.214802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d143ac2-4627-58f1-a5c6-72fac7fe3e62'}})
2026-04-11 00:42:22.214807 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:22.214813 | orchestrator |
2026-04-11 00:42:22.214819 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-11 00:42:22.214825 | orchestrator | Saturday 11 April 2026 00:42:19 +0000 (0:00:00.134) 0:00:34.983 ********
2026-04-11 00:42:22.214831 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:42:22.214836 | orchestrator |
2026-04-11 00:42:22.214842 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-11 00:42:22.214848 | orchestrator | Saturday 11 April 2026 00:42:20 +0000 (0:00:00.128) 0:00:35.111 ********
2026-04-11 00:42:22.214854 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:42:22.214859 | orchestrator |
2026-04-11 00:42:22.214865 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-11 00:42:22.214871 | orchestrator | Saturday 11 April 2026 00:42:20 +0000 (0:00:00.129) 0:00:35.241 ********
2026-04-11 00:42:22.214877 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:22.214883 | orchestrator |
2026-04-11 00:42:22.214888 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-11 00:42:22.214894 | orchestrator | Saturday 11 April 2026 00:42:20 +0000 (0:00:00.111) 0:00:35.352 ********
2026-04-11 00:42:22.214900 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:22.214906 | orchestrator |
2026-04-11 00:42:22.214911 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-11 00:42:22.214917 | orchestrator | Saturday 11 April 2026 00:42:20 +0000 (0:00:00.110) 0:00:35.463 ********
2026-04-11 00:42:22.214923 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:22.214929 | orchestrator |
2026-04-11 00:42:22.214960 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-11 00:42:22.214967 | orchestrator | Saturday 11 April 2026 00:42:20 +0000 (0:00:00.122) 0:00:35.586 ********
2026-04-11 00:42:22.214973 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 00:42:22.214979 | orchestrator |     "ceph_osd_devices": {
2026-04-11 00:42:22.214985 | orchestrator |         "sdb": {
2026-04-11 00:42:22.215004 | orchestrator |             "osd_lvm_uuid": "b3fdd62a-f314-5cc9-917d-dd1b13b97b05"
2026-04-11 00:42:22.215012 | orchestrator |         },
2026-04-11 00:42:22.215018 | orchestrator |         "sdc": {
2026-04-11 00:42:22.215024 | orchestrator |             "osd_lvm_uuid": "1d143ac2-4627-58f1-a5c6-72fac7fe3e62"
2026-04-11 00:42:22.215030 | orchestrator |         }
2026-04-11 00:42:22.215036 | orchestrator |     }
2026-04-11 00:42:22.215042 | orchestrator | }
2026-04-11 00:42:22.215048 | orchestrator |
2026-04-11 00:42:22.215054 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-11 00:42:22.215091 | orchestrator | Saturday 11 April 2026 00:42:20 +0000 (0:00:00.126) 0:00:35.712 ********
2026-04-11 00:42:22.215097 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:22.215103 | orchestrator |
2026-04-11 00:42:22.215109 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-11 00:42:22.215115 | orchestrator | Saturday 11 April 2026 00:42:20 +0000 (0:00:00.104) 0:00:35.816 ********
2026-04-11 00:42:22.215121 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:22.215126 | orchestrator |
2026-04-11 00:42:22.215132 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-11 00:42:22.215138 | orchestrator | Saturday 11 April 2026 00:42:21 +0000 (0:00:00.243) 0:00:36.060 ********
2026-04-11 00:42:22.215143 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:42:22.215149 | orchestrator |
2026-04-11 00:42:22.215155 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-11 00:42:22.215174 | orchestrator | Saturday 11 April 2026 00:42:21 +0000 (0:00:00.117) 0:00:36.177 ********
2026-04-11 00:42:22.215180 | orchestrator | changed: [testbed-node-5] => {
2026-04-11 00:42:22.215186 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-11 00:42:22.215192 | orchestrator |         "ceph_osd_devices": {
2026-04-11 00:42:22.215198 | orchestrator |             "sdb": {
2026-04-11 00:42:22.215204 | orchestrator |                 "osd_lvm_uuid": "b3fdd62a-f314-5cc9-917d-dd1b13b97b05"
2026-04-11 00:42:22.215210 | orchestrator |             },
2026-04-11 00:42:22.215216 | orchestrator |             "sdc": {
2026-04-11 00:42:22.215221 | orchestrator |                 "osd_lvm_uuid": "1d143ac2-4627-58f1-a5c6-72fac7fe3e62"
2026-04-11 00:42:22.215230 | orchestrator |             }
2026-04-11 00:42:22.215236 | orchestrator |         },
2026-04-11 00:42:22.215242 | orchestrator |         "lvm_volumes": [
2026-04-11 00:42:22.215248 | orchestrator |             {
2026-04-11 00:42:22.215254 | orchestrator |                 "data": "osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05",
2026-04-11 00:42:22.215260 | orchestrator |                 "data_vg": "ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05"
2026-04-11 00:42:22.215266 | orchestrator |             },
2026-04-11 00:42:22.215272 | orchestrator |             {
2026-04-11 00:42:22.215280 | orchestrator |                 "data": "osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62",
2026-04-11 00:42:22.215286 | orchestrator |                 "data_vg": "ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62"
2026-04-11 00:42:22.215292 | orchestrator |             }
2026-04-11 00:42:22.215298 | orchestrator |         ]
2026-04-11 00:42:22.215304 | orchestrator |     }
2026-04-11 00:42:22.215310 | orchestrator | }
2026-04-11 00:42:22.215316 | orchestrator |
2026-04-11 00:42:22.215321 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-11 00:42:22.215327 | orchestrator | Saturday 11 April 2026 00:42:21 +0000 (0:00:00.184) 0:00:36.362 ********
2026-04-11 00:42:22.215333 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-11 00:42:22.215339 | orchestrator |
2026-04-11 00:42:22.215344 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:42:22.215350 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-04-11 00:42:22.215357 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-04-11 00:42:22.215363 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-04-11 00:42:22.215368 | orchestrator |
2026-04-11 00:42:22.215374 | orchestrator |
2026-04-11 00:42:22.215380 | orchestrator |
2026-04-11 00:42:22.215386 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:42:22.215392 | orchestrator | Saturday 11 April 2026 00:42:22 +0000 (0:00:00.863) 0:00:37.225 ********
2026-04-11 00:42:22.215397 | orchestrator | ===============================================================================
2026-04-11 00:42:22.215407 | orchestrator | Write configuration file ------------------------------------------------ 4.03s
2026-04-11 00:42:22.215413 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s
2026-04-11 00:42:22.215419 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s
2026-04-11 00:42:22.215424 | orchestrator | Get initial list of available block devices ----------------------------- 0.96s
2026-04-11 00:42:22.215430 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.83s
2026-04-11 00:42:22.215436 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s
2026-04-11 00:42:22.215441 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2026-04-11 00:42:22.215447 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2026-04-11 00:42:22.215453 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2026-04-11 00:42:22.215458 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s
2026-04-11 00:42:22.215464 | orchestrator | Print configuration data ------------------------------------------------ 0.59s
2026-04-11 00:42:22.215470 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s
2026-04-11
00:42:22.215476 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2026-04-11 00:42:22.215486 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.55s 2026-04-11 00:42:22.415588 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s 2026-04-11 00:42:22.415700 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2026-04-11 00:42:22.415715 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.53s 2026-04-11 00:42:22.415727 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2026-04-11 00:42:22.415738 | orchestrator | Print DB devices -------------------------------------------------------- 0.50s 2026-04-11 00:42:22.415749 | orchestrator | Set WAL devices config data --------------------------------------------- 0.50s 2026-04-11 00:42:43.925777 | orchestrator | 2026-04-11 00:42:43 | INFO  | Task 1943b02b-4e21-4192-ad6a-2cea0e7de26c (sync inventory) is running in background. Output coming soon. 
2026-04-11 00:43:12.591736 | orchestrator | 2026-04-11 00:42:45 | INFO  | Starting group_vars file reorganization 2026-04-11 00:43:12.591845 | orchestrator | 2026-04-11 00:42:45 | INFO  | Moved 0 file(s) to their respective directories 2026-04-11 00:43:12.591855 | orchestrator | 2026-04-11 00:42:45 | INFO  | Group_vars file reorganization completed 2026-04-11 00:43:12.591860 | orchestrator | 2026-04-11 00:42:48 | INFO  | Starting variable preparation from inventory 2026-04-11 00:43:12.591864 | orchestrator | 2026-04-11 00:42:50 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-04-11 00:43:12.591870 | orchestrator | 2026-04-11 00:42:50 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-04-11 00:43:12.591875 | orchestrator | 2026-04-11 00:42:50 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-04-11 00:43:12.591879 | orchestrator | 2026-04-11 00:42:50 | INFO  | 3 file(s) written, 6 host(s) processed 2026-04-11 00:43:12.591884 | orchestrator | 2026-04-11 00:42:50 | INFO  | Variable preparation completed 2026-04-11 00:43:12.591926 | orchestrator | 2026-04-11 00:42:52 | INFO  | Starting inventory overwrite handling 2026-04-11 00:43:12.591932 | orchestrator | 2026-04-11 00:42:52 | INFO  | Handling group overwrites in 99-overwrite 2026-04-11 00:43:12.591936 | orchestrator | 2026-04-11 00:42:52 | INFO  | Removing group frr:children from 60-generic 2026-04-11 00:43:12.591941 | orchestrator | 2026-04-11 00:42:52 | INFO  | Removing group netbird:children from 50-infrastructure 2026-04-11 00:43:12.591964 | orchestrator | 2026-04-11 00:42:52 | INFO  | Removing group ceph-rgw from 50-ceph 2026-04-11 00:43:12.592003 | orchestrator | 2026-04-11 00:42:52 | INFO  | Removing group ceph-mds from 50-ceph 2026-04-11 00:43:12.592009 | orchestrator | 2026-04-11 00:42:52 | INFO  | Handling group overwrites in 20-roles 2026-04-11 00:43:12.592013 | orchestrator | 2026-04-11 00:42:52 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-04-11 00:43:12.592017 | orchestrator | 2026-04-11 00:42:52 | INFO  | Removed 5 group(s) in total 2026-04-11 00:43:12.592022 | orchestrator | 2026-04-11 00:42:52 | INFO  | Inventory overwrite handling completed 2026-04-11 00:43:12.592027 | orchestrator | 2026-04-11 00:42:53 | INFO  | Starting merge of inventory files 2026-04-11 00:43:12.592031 | orchestrator | 2026-04-11 00:42:53 | INFO  | Inventory files merged successfully 2026-04-11 00:43:12.592036 | orchestrator | 2026-04-11 00:42:57 | INFO  | Generating minified hosts file 2026-04-11 00:43:12.592040 | orchestrator | 2026-04-11 00:42:59 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml 2026-04-11 00:43:12.592045 | orchestrator | 2026-04-11 00:42:59 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json 2026-04-11 00:43:12.592049 | orchestrator | 2026-04-11 00:43:00 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-04-11 00:43:12.592053 | orchestrator | 2026-04-11 00:43:11 | INFO  | Successfully wrote ClusterShell configuration 2026-04-11 00:43:12.592057 | orchestrator | [master 0e91029] 2026-04-11-00-43 2026-04-11 00:43:12.592063 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-) 2026-04-11 00:43:12.592067 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml 2026-04-11 00:43:12.592071 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml 2026-04-11 00:43:12.592075 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml 2026-04-11 00:43:13.837720 | orchestrator | 2026-04-11 00:43:13 | INFO  | Prepare task for execution of ceph-create-lvm-devices. 2026-04-11 00:43:13.896721 | orchestrator | 2026-04-11 00:43:13 | INFO  | Task 270bc94e-00a5-4021-8d54-29a42fd6dda4 (ceph-create-lvm-devices) was prepared for execution. 
2026-04-11 00:43:13.896878 | orchestrator | 2026-04-11 00:43:13 | INFO  | It takes a moment until task 270bc94e-00a5-4021-8d54-29a42fd6dda4 (ceph-create-lvm-devices) has been started and output is visible here. 2026-04-11 00:43:24.592131 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-11 00:43:24.592242 | orchestrator | 2.16.14 2026-04-11 00:43:24.592257 | orchestrator | 2026-04-11 00:43:24.592267 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-11 00:43:24.592277 | orchestrator | 2026-04-11 00:43:24.592285 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-11 00:43:24.592294 | orchestrator | Saturday 11 April 2026 00:43:17 +0000 (0:00:00.257) 0:00:00.257 ******** 2026-04-11 00:43:24.592302 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-11 00:43:24.592310 | orchestrator | 2026-04-11 00:43:24.592318 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-11 00:43:24.592326 | orchestrator | Saturday 11 April 2026 00:43:18 +0000 (0:00:00.207) 0:00:00.465 ******** 2026-04-11 00:43:24.592334 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:43:24.592342 | orchestrator | 2026-04-11 00:43:24.592351 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.592358 | orchestrator | Saturday 11 April 2026 00:43:18 +0000 (0:00:00.206) 0:00:00.671 ******** 2026-04-11 00:43:24.592366 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-11 00:43:24.592395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-11 00:43:24.592403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-11 00:43:24.592411 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-11 00:43:24.592419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-11 00:43:24.592427 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-11 00:43:24.592443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-11 00:43:24.592457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-11 00:43:24.592488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-11 00:43:24.592502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-11 00:43:24.592515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-11 00:43:24.592529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-11 00:43:24.592542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-11 00:43:24.592556 | orchestrator | 2026-04-11 00:43:24.592569 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.592583 | orchestrator | Saturday 11 April 2026 00:43:18 +0000 (0:00:00.355) 0:00:01.027 ******** 2026-04-11 00:43:24.592595 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.592609 | orchestrator | 2026-04-11 00:43:24.592622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.592637 | orchestrator | Saturday 11 April 2026 00:43:18 +0000 (0:00:00.375) 0:00:01.403 ******** 2026-04-11 00:43:24.592651 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.592666 | orchestrator | 2026-04-11 00:43:24.592680 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.592694 | orchestrator | Saturday 11 April 2026 00:43:19 +0000 (0:00:00.175) 0:00:01.578 ******** 2026-04-11 00:43:24.592707 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.592721 | orchestrator | 2026-04-11 00:43:24.592734 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.592748 | orchestrator | Saturday 11 April 2026 00:43:19 +0000 (0:00:00.170) 0:00:01.749 ******** 2026-04-11 00:43:24.592762 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.592775 | orchestrator | 2026-04-11 00:43:24.592789 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.592804 | orchestrator | Saturday 11 April 2026 00:43:19 +0000 (0:00:00.214) 0:00:01.963 ******** 2026-04-11 00:43:24.592818 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.592832 | orchestrator | 2026-04-11 00:43:24.592845 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.592859 | orchestrator | Saturday 11 April 2026 00:43:19 +0000 (0:00:00.167) 0:00:02.130 ******** 2026-04-11 00:43:24.592873 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.592907 | orchestrator | 2026-04-11 00:43:24.592921 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.592935 | orchestrator | Saturday 11 April 2026 00:43:19 +0000 (0:00:00.184) 0:00:02.315 ******** 2026-04-11 00:43:24.592949 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.592963 | orchestrator | 2026-04-11 00:43:24.592977 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.592991 | orchestrator | Saturday 11 April 2026 00:43:20 +0000 (0:00:00.211) 0:00:02.527 ******** 
2026-04-11 00:43:24.593004 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.593017 | orchestrator | 2026-04-11 00:43:24.593030 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.593055 | orchestrator | Saturday 11 April 2026 00:43:20 +0000 (0:00:00.211) 0:00:02.739 ******** 2026-04-11 00:43:24.593068 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b) 2026-04-11 00:43:24.593084 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b) 2026-04-11 00:43:24.593097 | orchestrator | 2026-04-11 00:43:24.593110 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.593143 | orchestrator | Saturday 11 April 2026 00:43:20 +0000 (0:00:00.373) 0:00:03.112 ******** 2026-04-11 00:43:24.593157 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dbdf8861-c941-4e45-ade6-c92a4c866684) 2026-04-11 00:43:24.593170 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dbdf8861-c941-4e45-ade6-c92a4c866684) 2026-04-11 00:43:24.593183 | orchestrator | 2026-04-11 00:43:24.593196 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.593210 | orchestrator | Saturday 11 April 2026 00:43:21 +0000 (0:00:00.375) 0:00:03.487 ******** 2026-04-11 00:43:24.593223 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_baefa5b6-df7d-4524-a048-85b2fa508a08) 2026-04-11 00:43:24.593236 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_baefa5b6-df7d-4524-a048-85b2fa508a08) 2026-04-11 00:43:24.593250 | orchestrator | 2026-04-11 00:43:24.593263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.593276 | orchestrator | Saturday 11 April 2026 00:43:21 
+0000 (0:00:00.502) 0:00:03.990 ******** 2026-04-11 00:43:24.593289 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_766868e9-fa25-49fa-8c1f-44c6e6ffb547) 2026-04-11 00:43:24.593303 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_766868e9-fa25-49fa-8c1f-44c6e6ffb547) 2026-04-11 00:43:24.593316 | orchestrator | 2026-04-11 00:43:24.593329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:24.593343 | orchestrator | Saturday 11 April 2026 00:43:22 +0000 (0:00:00.577) 0:00:04.567 ******** 2026-04-11 00:43:24.593356 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-11 00:43:24.593369 | orchestrator | 2026-04-11 00:43:24.593382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:24.593396 | orchestrator | Saturday 11 April 2026 00:43:22 +0000 (0:00:00.701) 0:00:05.269 ******** 2026-04-11 00:43:24.593409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-11 00:43:24.593423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-11 00:43:24.593436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-11 00:43:24.593449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-11 00:43:24.593462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-11 00:43:24.593475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-11 00:43:24.593489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-11 00:43:24.593502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-04-11 00:43:24.593515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-11 00:43:24.593528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-11 00:43:24.593541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-11 00:43:24.593555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-11 00:43:24.593585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-11 00:43:24.593598 | orchestrator | 2026-04-11 00:43:24.593612 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:24.593625 | orchestrator | Saturday 11 April 2026 00:43:23 +0000 (0:00:00.410) 0:00:05.680 ******** 2026-04-11 00:43:24.593638 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.593651 | orchestrator | 2026-04-11 00:43:24.593664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:24.593677 | orchestrator | Saturday 11 April 2026 00:43:23 +0000 (0:00:00.194) 0:00:05.874 ******** 2026-04-11 00:43:24.593691 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.593704 | orchestrator | 2026-04-11 00:43:24.593717 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:24.593730 | orchestrator | Saturday 11 April 2026 00:43:23 +0000 (0:00:00.185) 0:00:06.060 ******** 2026-04-11 00:43:24.593743 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.593756 | orchestrator | 2026-04-11 00:43:24.593770 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:24.593783 | orchestrator | Saturday 11 April 2026 00:43:23 
+0000 (0:00:00.204) 0:00:06.264 ******** 2026-04-11 00:43:24.593796 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.593809 | orchestrator | 2026-04-11 00:43:24.593822 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:24.593836 | orchestrator | Saturday 11 April 2026 00:43:24 +0000 (0:00:00.184) 0:00:06.448 ******** 2026-04-11 00:43:24.593849 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.593862 | orchestrator | 2026-04-11 00:43:24.593875 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:24.593914 | orchestrator | Saturday 11 April 2026 00:43:24 +0000 (0:00:00.188) 0:00:06.637 ******** 2026-04-11 00:43:24.593927 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.593940 | orchestrator | 2026-04-11 00:43:24.593954 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:24.593967 | orchestrator | Saturday 11 April 2026 00:43:24 +0000 (0:00:00.192) 0:00:06.829 ******** 2026-04-11 00:43:24.593980 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:24.593994 | orchestrator | 2026-04-11 00:43:24.594104 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:32.386592 | orchestrator | Saturday 11 April 2026 00:43:24 +0000 (0:00:00.191) 0:00:07.021 ******** 2026-04-11 00:43:32.386726 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.386754 | orchestrator | 2026-04-11 00:43:32.386774 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:32.386795 | orchestrator | Saturday 11 April 2026 00:43:24 +0000 (0:00:00.176) 0:00:07.197 ******** 2026-04-11 00:43:32.386814 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-11 00:43:32.386833 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-11 
00:43:32.386852 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-11 00:43:32.386960 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-11 00:43:32.386983 | orchestrator | 2026-04-11 00:43:32.387003 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:32.387022 | orchestrator | Saturday 11 April 2026 00:43:25 +0000 (0:00:01.053) 0:00:08.250 ******** 2026-04-11 00:43:32.387041 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.387058 | orchestrator | 2026-04-11 00:43:32.387078 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:32.387098 | orchestrator | Saturday 11 April 2026 00:43:26 +0000 (0:00:00.198) 0:00:08.449 ******** 2026-04-11 00:43:32.387117 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.387137 | orchestrator | 2026-04-11 00:43:32.387157 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:32.387177 | orchestrator | Saturday 11 April 2026 00:43:26 +0000 (0:00:00.195) 0:00:08.644 ******** 2026-04-11 00:43:32.387229 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.387248 | orchestrator | 2026-04-11 00:43:32.387267 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:32.387286 | orchestrator | Saturday 11 April 2026 00:43:26 +0000 (0:00:00.224) 0:00:08.869 ******** 2026-04-11 00:43:32.387304 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.387322 | orchestrator | 2026-04-11 00:43:32.387351 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-11 00:43:32.387370 | orchestrator | Saturday 11 April 2026 00:43:26 +0000 (0:00:00.196) 0:00:09.065 ******** 2026-04-11 00:43:32.387389 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.387407 | orchestrator | 2026-04-11 
00:43:32.387425 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-11 00:43:32.387444 | orchestrator | Saturday 11 April 2026 00:43:26 +0000 (0:00:00.131) 0:00:09.197 ******** 2026-04-11 00:43:32.387463 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e341cbc8-c5ec-51c3-80df-3957b9d9208d'}}) 2026-04-11 00:43:32.387482 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '11d3af0d-f4a7-5343-8eab-aab153531328'}}) 2026-04-11 00:43:32.387500 | orchestrator | 2026-04-11 00:43:32.387519 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-11 00:43:32.387538 | orchestrator | Saturday 11 April 2026 00:43:26 +0000 (0:00:00.194) 0:00:09.391 ******** 2026-04-11 00:43:32.387558 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'}) 2026-04-11 00:43:32.387577 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'}) 2026-04-11 00:43:32.387593 | orchestrator | 2026-04-11 00:43:32.387611 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-11 00:43:32.387629 | orchestrator | Saturday 11 April 2026 00:43:28 +0000 (0:00:01.981) 0:00:11.373 ******** 2026-04-11 00:43:32.387647 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:32.387668 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:32.387685 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.387703 
| orchestrator | 2026-04-11 00:43:32.387721 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-11 00:43:32.387739 | orchestrator | Saturday 11 April 2026 00:43:29 +0000 (0:00:00.142) 0:00:11.516 ******** 2026-04-11 00:43:32.387756 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'}) 2026-04-11 00:43:32.387774 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'}) 2026-04-11 00:43:32.387792 | orchestrator | 2026-04-11 00:43:32.387809 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-11 00:43:32.387827 | orchestrator | Saturday 11 April 2026 00:43:30 +0000 (0:00:01.408) 0:00:12.924 ******** 2026-04-11 00:43:32.387843 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:32.387861 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:32.387908 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.387927 | orchestrator | 2026-04-11 00:43:32.387945 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-11 00:43:32.387980 | orchestrator | Saturday 11 April 2026 00:43:30 +0000 (0:00:00.165) 0:00:13.090 ******** 2026-04-11 00:43:32.388029 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.388049 | orchestrator | 2026-04-11 00:43:32.388069 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-11 00:43:32.388086 | orchestrator | Saturday 11 April 2026 
00:43:30 +0000 (0:00:00.124) 0:00:13.214 ******** 2026-04-11 00:43:32.388105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:32.388123 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:32.388142 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.388161 | orchestrator | 2026-04-11 00:43:32.388179 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-11 00:43:32.388198 | orchestrator | Saturday 11 April 2026 00:43:31 +0000 (0:00:00.317) 0:00:13.532 ******** 2026-04-11 00:43:32.388216 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.388235 | orchestrator | 2026-04-11 00:43:32.388254 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-11 00:43:32.388273 | orchestrator | Saturday 11 April 2026 00:43:31 +0000 (0:00:00.131) 0:00:13.664 ******** 2026-04-11 00:43:32.388293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:32.388312 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:32.388330 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.388349 | orchestrator | 2026-04-11 00:43:32.388367 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-11 00:43:32.388383 | orchestrator | Saturday 11 April 2026 00:43:31 +0000 (0:00:00.143) 0:00:13.808 ******** 2026-04-11 00:43:32.388400 | orchestrator | skipping: [testbed-node-3] 2026-04-11 
00:43:32.388417 | orchestrator | 2026-04-11 00:43:32.388433 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-11 00:43:32.388448 | orchestrator | Saturday 11 April 2026 00:43:31 +0000 (0:00:00.123) 0:00:13.931 ******** 2026-04-11 00:43:32.388464 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:32.388480 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:32.388496 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.388513 | orchestrator | 2026-04-11 00:43:32.388530 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-11 00:43:32.388545 | orchestrator | Saturday 11 April 2026 00:43:31 +0000 (0:00:00.145) 0:00:14.076 ******** 2026-04-11 00:43:32.388562 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:43:32.388578 | orchestrator | 2026-04-11 00:43:32.388595 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-11 00:43:32.388610 | orchestrator | Saturday 11 April 2026 00:43:31 +0000 (0:00:00.154) 0:00:14.231 ******** 2026-04-11 00:43:32.388626 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:32.388643 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:32.388659 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.388674 | orchestrator | 2026-04-11 00:43:32.388690 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] 
*************** 2026-04-11 00:43:32.388721 | orchestrator | Saturday 11 April 2026 00:43:31 +0000 (0:00:00.164) 0:00:14.396 ******** 2026-04-11 00:43:32.388737 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:32.388754 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:32.388769 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.388785 | orchestrator | 2026-04-11 00:43:32.388801 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-11 00:43:32.388818 | orchestrator | Saturday 11 April 2026 00:43:32 +0000 (0:00:00.145) 0:00:14.541 ******** 2026-04-11 00:43:32.388834 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:32.388850 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:32.388865 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.388911 | orchestrator | 2026-04-11 00:43:32.388928 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-11 00:43:32.388944 | orchestrator | Saturday 11 April 2026 00:43:32 +0000 (0:00:00.150) 0:00:14.692 ******** 2026-04-11 00:43:32.388961 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:32.388977 | orchestrator | 2026-04-11 00:43:32.388991 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-11 00:43:32.389023 | orchestrator | Saturday 11 April 2026 00:43:32 +0000 (0:00:00.123) 0:00:14.816 ******** 
2026-04-11 00:43:38.441613 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.441726 | orchestrator | 2026-04-11 00:43:38.441749 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-11 00:43:38.441769 | orchestrator | Saturday 11 April 2026 00:43:32 +0000 (0:00:00.130) 0:00:14.947 ******** 2026-04-11 00:43:38.441785 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.441801 | orchestrator | 2026-04-11 00:43:38.441818 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-11 00:43:38.441834 | orchestrator | Saturday 11 April 2026 00:43:32 +0000 (0:00:00.132) 0:00:15.079 ******** 2026-04-11 00:43:38.441850 | orchestrator | ok: [testbed-node-3] => { 2026-04-11 00:43:38.441932 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-11 00:43:38.441954 | orchestrator | } 2026-04-11 00:43:38.441973 | orchestrator | 2026-04-11 00:43:38.441990 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-11 00:43:38.442007 | orchestrator | Saturday 11 April 2026 00:43:32 +0000 (0:00:00.355) 0:00:15.435 ******** 2026-04-11 00:43:38.442092 | orchestrator | ok: [testbed-node-3] => { 2026-04-11 00:43:38.442111 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-11 00:43:38.442129 | orchestrator | } 2026-04-11 00:43:38.442140 | orchestrator | 2026-04-11 00:43:38.442150 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-11 00:43:38.442162 | orchestrator | Saturday 11 April 2026 00:43:33 +0000 (0:00:00.125) 0:00:15.561 ******** 2026-04-11 00:43:38.442174 | orchestrator | ok: [testbed-node-3] => { 2026-04-11 00:43:38.442185 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-11 00:43:38.442216 | orchestrator | } 2026-04-11 00:43:38.442227 | orchestrator | 2026-04-11 00:43:38.442239 | orchestrator | TASK [Gather DB VGs with total and 
available size in bytes] ******************** 2026-04-11 00:43:38.442249 | orchestrator | Saturday 11 April 2026 00:43:33 +0000 (0:00:00.139) 0:00:15.701 ******** 2026-04-11 00:43:38.442260 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:43:38.442272 | orchestrator | 2026-04-11 00:43:38.442287 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-11 00:43:38.442299 | orchestrator | Saturday 11 April 2026 00:43:33 +0000 (0:00:00.674) 0:00:16.375 ******** 2026-04-11 00:43:38.442309 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:43:38.442342 | orchestrator | 2026-04-11 00:43:38.442353 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-11 00:43:38.442364 | orchestrator | Saturday 11 April 2026 00:43:34 +0000 (0:00:00.509) 0:00:16.885 ******** 2026-04-11 00:43:38.442375 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:43:38.442386 | orchestrator | 2026-04-11 00:43:38.442396 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-11 00:43:38.442407 | orchestrator | Saturday 11 April 2026 00:43:34 +0000 (0:00:00.499) 0:00:17.385 ******** 2026-04-11 00:43:38.442418 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:43:38.442428 | orchestrator | 2026-04-11 00:43:38.442439 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-11 00:43:38.442450 | orchestrator | Saturday 11 April 2026 00:43:35 +0000 (0:00:00.152) 0:00:17.537 ******** 2026-04-11 00:43:38.442461 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.442471 | orchestrator | 2026-04-11 00:43:38.442480 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-11 00:43:38.442489 | orchestrator | Saturday 11 April 2026 00:43:35 +0000 (0:00:00.105) 0:00:17.643 ******** 2026-04-11 00:43:38.442499 | orchestrator | skipping: [testbed-node-3] 
2026-04-11 00:43:38.442508 | orchestrator | 2026-04-11 00:43:38.442518 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-11 00:43:38.442527 | orchestrator | Saturday 11 April 2026 00:43:35 +0000 (0:00:00.100) 0:00:17.743 ******** 2026-04-11 00:43:38.442537 | orchestrator | ok: [testbed-node-3] => { 2026-04-11 00:43:38.442546 | orchestrator |  "vgs_report": { 2026-04-11 00:43:38.442557 | orchestrator |  "vg": [] 2026-04-11 00:43:38.442567 | orchestrator |  } 2026-04-11 00:43:38.442576 | orchestrator | } 2026-04-11 00:43:38.442586 | orchestrator | 2026-04-11 00:43:38.442595 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-11 00:43:38.442605 | orchestrator | Saturday 11 April 2026 00:43:35 +0000 (0:00:00.118) 0:00:17.862 ******** 2026-04-11 00:43:38.442614 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.442623 | orchestrator | 2026-04-11 00:43:38.442633 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-11 00:43:38.442643 | orchestrator | Saturday 11 April 2026 00:43:35 +0000 (0:00:00.124) 0:00:17.986 ******** 2026-04-11 00:43:38.442653 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.442662 | orchestrator | 2026-04-11 00:43:38.442672 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-11 00:43:38.442681 | orchestrator | Saturday 11 April 2026 00:43:35 +0000 (0:00:00.117) 0:00:18.104 ******** 2026-04-11 00:43:38.442691 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.442700 | orchestrator | 2026-04-11 00:43:38.442709 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-11 00:43:38.442719 | orchestrator | Saturday 11 April 2026 00:43:35 +0000 (0:00:00.130) 0:00:18.234 ******** 2026-04-11 00:43:38.442728 | orchestrator | skipping: [testbed-node-3] 
2026-04-11 00:43:38.442737 | orchestrator | 2026-04-11 00:43:38.442747 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-11 00:43:38.442756 | orchestrator | Saturday 11 April 2026 00:43:36 +0000 (0:00:00.317) 0:00:18.552 ******** 2026-04-11 00:43:38.442765 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.442775 | orchestrator | 2026-04-11 00:43:38.442784 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-11 00:43:38.442794 | orchestrator | Saturday 11 April 2026 00:43:36 +0000 (0:00:00.126) 0:00:18.678 ******** 2026-04-11 00:43:38.442803 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.442812 | orchestrator | 2026-04-11 00:43:38.442822 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-11 00:43:38.442831 | orchestrator | Saturday 11 April 2026 00:43:36 +0000 (0:00:00.130) 0:00:18.809 ******** 2026-04-11 00:43:38.442840 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.442850 | orchestrator | 2026-04-11 00:43:38.442859 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-11 00:43:38.443055 | orchestrator | Saturday 11 April 2026 00:43:36 +0000 (0:00:00.143) 0:00:18.952 ******** 2026-04-11 00:43:38.443095 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.443106 | orchestrator | 2026-04-11 00:43:38.443116 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-11 00:43:38.443126 | orchestrator | Saturday 11 April 2026 00:43:36 +0000 (0:00:00.126) 0:00:19.079 ******** 2026-04-11 00:43:38.443135 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.443145 | orchestrator | 2026-04-11 00:43:38.443154 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-11 00:43:38.443164 | orchestrator | 
Saturday 11 April 2026 00:43:36 +0000 (0:00:00.140) 0:00:19.219 ******** 2026-04-11 00:43:38.443174 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.443184 | orchestrator | 2026-04-11 00:43:38.443193 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-11 00:43:38.443203 | orchestrator | Saturday 11 April 2026 00:43:36 +0000 (0:00:00.151) 0:00:19.371 ******** 2026-04-11 00:43:38.443213 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.443222 | orchestrator | 2026-04-11 00:43:38.443231 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-11 00:43:38.443241 | orchestrator | Saturday 11 April 2026 00:43:37 +0000 (0:00:00.131) 0:00:19.502 ******** 2026-04-11 00:43:38.443250 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.443260 | orchestrator | 2026-04-11 00:43:38.443269 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-11 00:43:38.443279 | orchestrator | Saturday 11 April 2026 00:43:37 +0000 (0:00:00.120) 0:00:19.622 ******** 2026-04-11 00:43:38.443288 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.443298 | orchestrator | 2026-04-11 00:43:38.443307 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-11 00:43:38.443317 | orchestrator | Saturday 11 April 2026 00:43:37 +0000 (0:00:00.119) 0:00:19.742 ******** 2026-04-11 00:43:38.443327 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.443336 | orchestrator | 2026-04-11 00:43:38.443353 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-11 00:43:38.443362 | orchestrator | Saturday 11 April 2026 00:43:37 +0000 (0:00:00.122) 0:00:19.865 ******** 2026-04-11 00:43:38.443373 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 
'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:38.443384 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:38.443394 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.443403 | orchestrator | 2026-04-11 00:43:38.443413 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-11 00:43:38.443423 | orchestrator | Saturday 11 April 2026 00:43:37 +0000 (0:00:00.140) 0:00:20.005 ******** 2026-04-11 00:43:38.443432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:38.443442 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:38.443452 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.443461 | orchestrator | 2026-04-11 00:43:38.443471 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-11 00:43:38.443481 | orchestrator | Saturday 11 April 2026 00:43:37 +0000 (0:00:00.332) 0:00:20.338 ******** 2026-04-11 00:43:38.443490 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:38.443500 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:38.443516 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.443525 | orchestrator | 2026-04-11 00:43:38.443535 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 
2026-04-11 00:43:38.443544 | orchestrator | Saturday 11 April 2026 00:43:38 +0000 (0:00:00.160) 0:00:20.498 ******** 2026-04-11 00:43:38.443554 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:38.443564 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:38.443573 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.443582 | orchestrator | 2026-04-11 00:43:38.443592 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-11 00:43:38.443601 | orchestrator | Saturday 11 April 2026 00:43:38 +0000 (0:00:00.149) 0:00:20.647 ******** 2026-04-11 00:43:38.443611 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:38.443621 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:38.443630 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:38.443639 | orchestrator | 2026-04-11 00:43:38.443649 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-11 00:43:38.443659 | orchestrator | Saturday 11 April 2026 00:43:38 +0000 (0:00:00.163) 0:00:20.811 ******** 2026-04-11 00:43:38.443675 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:43.357006 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 
'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:43.357081 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:43.357088 | orchestrator | 2026-04-11 00:43:43.357094 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-11 00:43:43.357099 | orchestrator | Saturday 11 April 2026 00:43:38 +0000 (0:00:00.150) 0:00:20.962 ******** 2026-04-11 00:43:43.357104 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:43.357108 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:43.357112 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:43.357115 | orchestrator | 2026-04-11 00:43:43.357120 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-11 00:43:43.357123 | orchestrator | Saturday 11 April 2026 00:43:38 +0000 (0:00:00.147) 0:00:21.109 ******** 2026-04-11 00:43:43.357127 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:43.357131 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:43.357135 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:43.357138 | orchestrator | 2026-04-11 00:43:43.357142 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-11 00:43:43.357146 | orchestrator | Saturday 11 April 2026 00:43:38 +0000 (0:00:00.148) 0:00:21.258 ******** 2026-04-11 00:43:43.357150 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:43:43.357163 | 
orchestrator | 2026-04-11 00:43:43.357167 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-11 00:43:43.357193 | orchestrator | Saturday 11 April 2026 00:43:39 +0000 (0:00:00.523) 0:00:21.781 ******** 2026-04-11 00:43:43.357197 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:43:43.357201 | orchestrator | 2026-04-11 00:43:43.357205 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-11 00:43:43.357209 | orchestrator | Saturday 11 April 2026 00:43:39 +0000 (0:00:00.511) 0:00:22.292 ******** 2026-04-11 00:43:43.357213 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:43:43.357217 | orchestrator | 2026-04-11 00:43:43.357221 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-11 00:43:43.357225 | orchestrator | Saturday 11 April 2026 00:43:40 +0000 (0:00:00.144) 0:00:22.437 ******** 2026-04-11 00:43:43.357229 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'vg_name': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'}) 2026-04-11 00:43:43.357234 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'vg_name': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'}) 2026-04-11 00:43:43.357238 | orchestrator | 2026-04-11 00:43:43.357241 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-11 00:43:43.357246 | orchestrator | Saturday 11 April 2026 00:43:40 +0000 (0:00:00.197) 0:00:22.635 ******** 2026-04-11 00:43:43.357249 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:43.357253 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 
'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:43.357257 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:43.357261 | orchestrator | 2026-04-11 00:43:43.357265 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-11 00:43:43.357269 | orchestrator | Saturday 11 April 2026 00:43:40 +0000 (0:00:00.156) 0:00:22.791 ******** 2026-04-11 00:43:43.357284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:43.357288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:43.357292 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:43.357296 | orchestrator | 2026-04-11 00:43:43.357300 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-11 00:43:43.357303 | orchestrator | Saturday 11 April 2026 00:43:40 +0000 (0:00:00.347) 0:00:23.138 ******** 2026-04-11 00:43:43.357307 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'})  2026-04-11 00:43:43.357311 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'})  2026-04-11 00:43:43.357315 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:43:43.357318 | orchestrator | 2026-04-11 00:43:43.357322 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-11 00:43:43.357326 | orchestrator | Saturday 11 April 2026 00:43:40 +0000 (0:00:00.156) 0:00:23.295 ******** 2026-04-11 00:43:43.357340 | orchestrator | ok: [testbed-node-3] => { 2026-04-11 
00:43:43.357344 | orchestrator |  "lvm_report": { 2026-04-11 00:43:43.357348 | orchestrator |  "lv": [ 2026-04-11 00:43:43.357352 | orchestrator |  { 2026-04-11 00:43:43.357356 | orchestrator |  "lv_name": "osd-block-11d3af0d-f4a7-5343-8eab-aab153531328", 2026-04-11 00:43:43.357360 | orchestrator |  "vg_name": "ceph-11d3af0d-f4a7-5343-8eab-aab153531328" 2026-04-11 00:43:43.357364 | orchestrator |  }, 2026-04-11 00:43:43.357368 | orchestrator |  { 2026-04-11 00:43:43.357375 | orchestrator |  "lv_name": "osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d", 2026-04-11 00:43:43.357379 | orchestrator |  "vg_name": "ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d" 2026-04-11 00:43:43.357383 | orchestrator |  } 2026-04-11 00:43:43.357387 | orchestrator |  ], 2026-04-11 00:43:43.357390 | orchestrator |  "pv": [ 2026-04-11 00:43:43.357394 | orchestrator |  { 2026-04-11 00:43:43.357398 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-11 00:43:43.357402 | orchestrator |  "vg_name": "ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d" 2026-04-11 00:43:43.357406 | orchestrator |  }, 2026-04-11 00:43:43.357409 | orchestrator |  { 2026-04-11 00:43:43.357413 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-11 00:43:43.357417 | orchestrator |  "vg_name": "ceph-11d3af0d-f4a7-5343-8eab-aab153531328" 2026-04-11 00:43:43.357421 | orchestrator |  } 2026-04-11 00:43:43.357424 | orchestrator |  ] 2026-04-11 00:43:43.357428 | orchestrator |  } 2026-04-11 00:43:43.357432 | orchestrator | } 2026-04-11 00:43:43.357436 | orchestrator | 2026-04-11 00:43:43.357440 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-11 00:43:43.357443 | orchestrator | 2026-04-11 00:43:43.357447 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-11 00:43:43.357453 | orchestrator | Saturday 11 April 2026 00:43:41 +0000 (0:00:00.330) 0:00:23.626 ******** 2026-04-11 00:43:43.357457 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-11 00:43:43.357461 | orchestrator | 2026-04-11 00:43:43.357465 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-11 00:43:43.357469 | orchestrator | Saturday 11 April 2026 00:43:41 +0000 (0:00:00.245) 0:00:23.872 ******** 2026-04-11 00:43:43.357472 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:43:43.357476 | orchestrator | 2026-04-11 00:43:43.357480 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:43.357484 | orchestrator | Saturday 11 April 2026 00:43:41 +0000 (0:00:00.204) 0:00:24.076 ******** 2026-04-11 00:43:43.357488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-11 00:43:43.357491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-11 00:43:43.357495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-11 00:43:43.357499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-11 00:43:43.357503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-11 00:43:43.357506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-11 00:43:43.357510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-11 00:43:43.357514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-11 00:43:43.357517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-11 00:43:43.357521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-11 00:43:43.357525 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-11 00:43:43.357529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-11 00:43:43.357532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-11 00:43:43.357536 | orchestrator | 2026-04-11 00:43:43.357540 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:43.357544 | orchestrator | Saturday 11 April 2026 00:43:41 +0000 (0:00:00.355) 0:00:24.432 ******** 2026-04-11 00:43:43.357547 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:43.357551 | orchestrator | 2026-04-11 00:43:43.357555 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:43.357562 | orchestrator | Saturday 11 April 2026 00:43:42 +0000 (0:00:00.169) 0:00:24.602 ******** 2026-04-11 00:43:43.357566 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:43.357571 | orchestrator | 2026-04-11 00:43:43.357575 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:43.357579 | orchestrator | Saturday 11 April 2026 00:43:42 +0000 (0:00:00.174) 0:00:24.777 ******** 2026-04-11 00:43:43.357583 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:43.357587 | orchestrator | 2026-04-11 00:43:43.357592 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:43.357596 | orchestrator | Saturday 11 April 2026 00:43:42 +0000 (0:00:00.180) 0:00:24.957 ******** 2026-04-11 00:43:43.357600 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:43.357604 | orchestrator | 2026-04-11 00:43:43.357608 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:43.357613 | orchestrator | Saturday 11 April 2026 00:43:42 +0000 
(0:00:00.459) 0:00:25.416 ******** 2026-04-11 00:43:43.357617 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:43.357621 | orchestrator | 2026-04-11 00:43:43.357626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:43.357630 | orchestrator | Saturday 11 April 2026 00:43:43 +0000 (0:00:00.185) 0:00:25.602 ******** 2026-04-11 00:43:43.357634 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:43.357638 | orchestrator | 2026-04-11 00:43:43.357645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:52.670318 | orchestrator | Saturday 11 April 2026 00:43:43 +0000 (0:00:00.183) 0:00:25.786 ******** 2026-04-11 00:43:52.670395 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.670405 | orchestrator | 2026-04-11 00:43:52.670413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:52.670420 | orchestrator | Saturday 11 April 2026 00:43:43 +0000 (0:00:00.184) 0:00:25.970 ******** 2026-04-11 00:43:52.670426 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.670433 | orchestrator | 2026-04-11 00:43:52.670439 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:52.670446 | orchestrator | Saturday 11 April 2026 00:43:43 +0000 (0:00:00.169) 0:00:26.140 ******** 2026-04-11 00:43:52.670452 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22) 2026-04-11 00:43:52.670460 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22) 2026-04-11 00:43:52.670467 | orchestrator | 2026-04-11 00:43:52.670473 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:52.670479 | orchestrator | Saturday 11 April 2026 00:43:44 +0000 
(0:00:00.361) 0:00:26.501 ******** 2026-04-11 00:43:52.670485 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_94d6c339-d22c-4552-bea0-bfaa215148a5) 2026-04-11 00:43:52.670492 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_94d6c339-d22c-4552-bea0-bfaa215148a5) 2026-04-11 00:43:52.670498 | orchestrator | 2026-04-11 00:43:52.670517 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:52.670524 | orchestrator | Saturday 11 April 2026 00:43:44 +0000 (0:00:00.385) 0:00:26.886 ******** 2026-04-11 00:43:52.670530 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5beaa40e-2504-4a54-8b0e-fa95ef0fce86) 2026-04-11 00:43:52.670536 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5beaa40e-2504-4a54-8b0e-fa95ef0fce86) 2026-04-11 00:43:52.670543 | orchestrator | 2026-04-11 00:43:52.670549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:52.670555 | orchestrator | Saturday 11 April 2026 00:43:44 +0000 (0:00:00.386) 0:00:27.273 ******** 2026-04-11 00:43:52.670561 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_73d9f47d-c224-4fc5-85fe-6159d35bcb3b) 2026-04-11 00:43:52.670585 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_73d9f47d-c224-4fc5-85fe-6159d35bcb3b) 2026-04-11 00:43:52.670591 | orchestrator | 2026-04-11 00:43:52.670598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 00:43:52.670604 | orchestrator | Saturday 11 April 2026 00:43:45 +0000 (0:00:00.402) 0:00:27.676 ******** 2026-04-11 00:43:52.670610 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-11 00:43:52.670616 | orchestrator | 2026-04-11 00:43:52.670623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 
00:43:52.670629 | orchestrator | Saturday 11 April 2026 00:43:45 +0000 (0:00:00.290) 0:00:27.966 ******** 2026-04-11 00:43:52.670635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-11 00:43:52.670646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-11 00:43:52.670656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-11 00:43:52.670663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-11 00:43:52.670669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-11 00:43:52.670675 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-11 00:43:52.670681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-11 00:43:52.670687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-11 00:43:52.670694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-11 00:43:52.670700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-11 00:43:52.670706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-11 00:43:52.670712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-11 00:43:52.670718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-11 00:43:52.670724 | orchestrator | 2026-04-11 00:43:52.670731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:52.670738 | 
orchestrator | Saturday 11 April 2026 00:43:46 +0000 (0:00:00.485) 0:00:28.452 ******** 2026-04-11 00:43:52.670749 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.670755 | orchestrator | 2026-04-11 00:43:52.670761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:52.670768 | orchestrator | Saturday 11 April 2026 00:43:46 +0000 (0:00:00.176) 0:00:28.629 ******** 2026-04-11 00:43:52.670774 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.670780 | orchestrator | 2026-04-11 00:43:52.670786 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:52.670792 | orchestrator | Saturday 11 April 2026 00:43:46 +0000 (0:00:00.174) 0:00:28.803 ******** 2026-04-11 00:43:52.670798 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.670804 | orchestrator | 2026-04-11 00:43:52.670823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:52.670830 | orchestrator | Saturday 11 April 2026 00:43:46 +0000 (0:00:00.170) 0:00:28.973 ******** 2026-04-11 00:43:52.670836 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.670843 | orchestrator | 2026-04-11 00:43:52.670898 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:52.670909 | orchestrator | Saturday 11 April 2026 00:43:46 +0000 (0:00:00.177) 0:00:29.151 ******** 2026-04-11 00:43:52.670917 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.670924 | orchestrator | 2026-04-11 00:43:52.670931 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:52.670944 | orchestrator | Saturday 11 April 2026 00:43:46 +0000 (0:00:00.221) 0:00:29.372 ******** 2026-04-11 00:43:52.670951 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.670958 | orchestrator | 2026-04-11 
00:43:52.670965 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:52.670972 | orchestrator | Saturday 11 April 2026 00:43:47 +0000 (0:00:00.177) 0:00:29.549 ******** 2026-04-11 00:43:52.670979 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.670986 | orchestrator | 2026-04-11 00:43:52.670993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:52.671000 | orchestrator | Saturday 11 April 2026 00:43:47 +0000 (0:00:00.181) 0:00:29.731 ******** 2026-04-11 00:43:52.671006 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.671013 | orchestrator | 2026-04-11 00:43:52.671020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:52.671028 | orchestrator | Saturday 11 April 2026 00:43:47 +0000 (0:00:00.193) 0:00:29.924 ******** 2026-04-11 00:43:52.671035 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-11 00:43:52.671043 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-11 00:43:52.671050 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-11 00:43:52.671056 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-11 00:43:52.671062 | orchestrator | 2026-04-11 00:43:52.671069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:52.671075 | orchestrator | Saturday 11 April 2026 00:43:48 +0000 (0:00:00.713) 0:00:30.637 ******** 2026-04-11 00:43:52.671081 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.671087 | orchestrator | 2026-04-11 00:43:52.671093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:52.671099 | orchestrator | Saturday 11 April 2026 00:43:48 +0000 (0:00:00.178) 0:00:30.816 ******** 2026-04-11 00:43:52.671105 | orchestrator | skipping: [testbed-node-4] 2026-04-11 
00:43:52.671111 | orchestrator | 2026-04-11 00:43:52.671118 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:52.671124 | orchestrator | Saturday 11 April 2026 00:43:48 +0000 (0:00:00.177) 0:00:30.993 ******** 2026-04-11 00:43:52.671130 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.671136 | orchestrator | 2026-04-11 00:43:52.671142 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 00:43:52.671150 | orchestrator | Saturday 11 April 2026 00:43:49 +0000 (0:00:00.471) 0:00:31.465 ******** 2026-04-11 00:43:52.671160 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.671170 | orchestrator | 2026-04-11 00:43:52.671179 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-11 00:43:52.671187 | orchestrator | Saturday 11 April 2026 00:43:49 +0000 (0:00:00.186) 0:00:31.651 ******** 2026-04-11 00:43:52.671196 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.671204 | orchestrator | 2026-04-11 00:43:52.671213 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-11 00:43:52.671222 | orchestrator | Saturday 11 April 2026 00:43:49 +0000 (0:00:00.119) 0:00:31.771 ******** 2026-04-11 00:43:52.671247 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '55dbb7e6-b9f3-5799-9073-e4231f2b06a1'}}) 2026-04-11 00:43:52.671257 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '296a8e73-f310-5078-85ef-ed795f213448'}}) 2026-04-11 00:43:52.671266 | orchestrator | 2026-04-11 00:43:52.671275 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-11 00:43:52.671284 | orchestrator | Saturday 11 April 2026 00:43:49 +0000 (0:00:00.161) 0:00:31.933 ******** 2026-04-11 00:43:52.671295 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'}) 2026-04-11 00:43:52.671305 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'}) 2026-04-11 00:43:52.671321 | orchestrator | 2026-04-11 00:43:52.671330 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-11 00:43:52.671340 | orchestrator | Saturday 11 April 2026 00:43:51 +0000 (0:00:01.762) 0:00:33.696 ******** 2026-04-11 00:43:52.671350 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:43:52.671360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:43:52.671370 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:52.671379 | orchestrator | 2026-04-11 00:43:52.671389 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-11 00:43:52.671399 | orchestrator | Saturday 11 April 2026 00:43:51 +0000 (0:00:00.151) 0:00:33.848 ******** 2026-04-11 00:43:52.671409 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'}) 2026-04-11 00:43:52.671428 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'}) 2026-04-11 00:43:57.829941 | orchestrator | 2026-04-11 00:43:57.830074 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-11 00:43:57.830087 | orchestrator | Saturday 11 April 2026 
00:43:52 +0000 (0:00:01.330) 0:00:35.178 ******** 2026-04-11 00:43:57.830094 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:43:57.830103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:43:57.830110 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830117 | orchestrator | 2026-04-11 00:43:57.830123 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-11 00:43:57.830129 | orchestrator | Saturday 11 April 2026 00:43:52 +0000 (0:00:00.166) 0:00:35.345 ******** 2026-04-11 00:43:57.830136 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830142 | orchestrator | 2026-04-11 00:43:57.830148 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-11 00:43:57.830154 | orchestrator | Saturday 11 April 2026 00:43:53 +0000 (0:00:00.134) 0:00:35.480 ******** 2026-04-11 00:43:57.830173 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:43:57.830180 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:43:57.830186 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830193 | orchestrator | 2026-04-11 00:43:57.830199 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-11 00:43:57.830205 | orchestrator | Saturday 11 April 2026 00:43:53 +0000 (0:00:00.169) 0:00:35.650 ******** 2026-04-11 00:43:57.830211 | orchestrator | skipping: [testbed-node-4] 2026-04-11 
00:43:57.830217 | orchestrator | 2026-04-11 00:43:57.830223 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-11 00:43:57.830230 | orchestrator | Saturday 11 April 2026 00:43:53 +0000 (0:00:00.155) 0:00:35.805 ******** 2026-04-11 00:43:57.830236 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:43:57.830242 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:43:57.830248 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830270 | orchestrator | 2026-04-11 00:43:57.830277 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-11 00:43:57.830283 | orchestrator | Saturday 11 April 2026 00:43:53 +0000 (0:00:00.141) 0:00:35.946 ******** 2026-04-11 00:43:57.830289 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830295 | orchestrator | 2026-04-11 00:43:57.830302 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-11 00:43:57.830309 | orchestrator | Saturday 11 April 2026 00:43:53 +0000 (0:00:00.367) 0:00:36.313 ******** 2026-04-11 00:43:57.830315 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:43:57.830321 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:43:57.830327 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830333 | orchestrator | 2026-04-11 00:43:57.830339 | orchestrator | TASK [Prepare variables for OSD count check] 
*********************************** 2026-04-11 00:43:57.830346 | orchestrator | Saturday 11 April 2026 00:43:54 +0000 (0:00:00.156) 0:00:36.469 ******** 2026-04-11 00:43:57.830352 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:43:57.830359 | orchestrator | 2026-04-11 00:43:57.830366 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-11 00:43:57.830372 | orchestrator | Saturday 11 April 2026 00:43:54 +0000 (0:00:00.134) 0:00:36.604 ******** 2026-04-11 00:43:57.830378 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:43:57.830384 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:43:57.830390 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830397 | orchestrator | 2026-04-11 00:43:57.830403 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-11 00:43:57.830409 | orchestrator | Saturday 11 April 2026 00:43:54 +0000 (0:00:00.141) 0:00:36.745 ******** 2026-04-11 00:43:57.830415 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:43:57.830421 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:43:57.830427 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830434 | orchestrator | 2026-04-11 00:43:57.830440 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-11 00:43:57.830461 | orchestrator | Saturday 11 April 2026 00:43:54 +0000 (0:00:00.151) 0:00:36.897 
******** 2026-04-11 00:43:57.830469 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:43:57.830476 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:43:57.830483 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830490 | orchestrator | 2026-04-11 00:43:57.830497 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-11 00:43:57.830505 | orchestrator | Saturday 11 April 2026 00:43:54 +0000 (0:00:00.146) 0:00:37.044 ******** 2026-04-11 00:43:57.830511 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830518 | orchestrator | 2026-04-11 00:43:57.830525 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-11 00:43:57.830532 | orchestrator | Saturday 11 April 2026 00:43:54 +0000 (0:00:00.126) 0:00:37.170 ******** 2026-04-11 00:43:57.830539 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830551 | orchestrator | 2026-04-11 00:43:57.830558 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-11 00:43:57.830565 | orchestrator | Saturday 11 April 2026 00:43:54 +0000 (0:00:00.136) 0:00:37.307 ******** 2026-04-11 00:43:57.830576 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830583 | orchestrator | 2026-04-11 00:43:57.830591 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-11 00:43:57.830599 | orchestrator | Saturday 11 April 2026 00:43:54 +0000 (0:00:00.117) 0:00:37.425 ******** 2026-04-11 00:43:57.830606 | orchestrator | ok: [testbed-node-4] => { 2026-04-11 00:43:57.830614 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-11 
00:43:57.830622 | orchestrator | } 2026-04-11 00:43:57.830630 | orchestrator | 2026-04-11 00:43:57.830637 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-11 00:43:57.830645 | orchestrator | Saturday 11 April 2026 00:43:55 +0000 (0:00:00.121) 0:00:37.546 ******** 2026-04-11 00:43:57.830653 | orchestrator | ok: [testbed-node-4] => { 2026-04-11 00:43:57.830661 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-11 00:43:57.830668 | orchestrator | } 2026-04-11 00:43:57.830676 | orchestrator | 2026-04-11 00:43:57.830684 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-11 00:43:57.830691 | orchestrator | Saturday 11 April 2026 00:43:55 +0000 (0:00:00.126) 0:00:37.673 ******** 2026-04-11 00:43:57.830698 | orchestrator | ok: [testbed-node-4] => { 2026-04-11 00:43:57.830706 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-11 00:43:57.830714 | orchestrator | } 2026-04-11 00:43:57.830722 | orchestrator | 2026-04-11 00:43:57.830729 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-11 00:43:57.830737 | orchestrator | Saturday 11 April 2026 00:43:55 +0000 (0:00:00.121) 0:00:37.795 ******** 2026-04-11 00:43:57.830744 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:43:57.830752 | orchestrator | 2026-04-11 00:43:57.830759 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-11 00:43:57.830767 | orchestrator | Saturday 11 April 2026 00:43:55 +0000 (0:00:00.601) 0:00:38.396 ******** 2026-04-11 00:43:57.830775 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:43:57.830782 | orchestrator | 2026-04-11 00:43:57.830790 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-11 00:43:57.830798 | orchestrator | Saturday 11 April 2026 00:43:56 +0000 (0:00:00.462) 0:00:38.859 ******** 2026-04-11 
00:43:57.830805 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:43:57.830813 | orchestrator | 2026-04-11 00:43:57.830821 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-11 00:43:57.830828 | orchestrator | Saturday 11 April 2026 00:43:56 +0000 (0:00:00.471) 0:00:39.330 ******** 2026-04-11 00:43:57.830834 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:43:57.830858 | orchestrator | 2026-04-11 00:43:57.830865 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-11 00:43:57.830872 | orchestrator | Saturday 11 April 2026 00:43:57 +0000 (0:00:00.124) 0:00:39.455 ******** 2026-04-11 00:43:57.830879 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830886 | orchestrator | 2026-04-11 00:43:57.830892 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-11 00:43:57.830899 | orchestrator | Saturday 11 April 2026 00:43:57 +0000 (0:00:00.103) 0:00:39.559 ******** 2026-04-11 00:43:57.830906 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.830913 | orchestrator | 2026-04-11 00:43:57.830920 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-11 00:43:57.830927 | orchestrator | Saturday 11 April 2026 00:43:57 +0000 (0:00:00.093) 0:00:39.652 ******** 2026-04-11 00:43:57.830933 | orchestrator | ok: [testbed-node-4] => { 2026-04-11 00:43:57.830940 | orchestrator |  "vgs_report": { 2026-04-11 00:43:57.830948 | orchestrator |  "vg": [] 2026-04-11 00:43:57.830955 | orchestrator |  } 2026-04-11 00:43:57.830962 | orchestrator | } 2026-04-11 00:43:57.830969 | orchestrator | 2026-04-11 00:43:57.830976 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-11 00:43:57.830988 | orchestrator | Saturday 11 April 2026 00:43:57 +0000 (0:00:00.127) 0:00:39.780 ******** 2026-04-11 00:43:57.830994 | 
orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.831001 | orchestrator | 2026-04-11 00:43:57.831008 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-11 00:43:57.831015 | orchestrator | Saturday 11 April 2026 00:43:57 +0000 (0:00:00.120) 0:00:39.900 ******** 2026-04-11 00:43:57.831022 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.831028 | orchestrator | 2026-04-11 00:43:57.831035 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-11 00:43:57.831042 | orchestrator | Saturday 11 April 2026 00:43:57 +0000 (0:00:00.119) 0:00:40.020 ******** 2026-04-11 00:43:57.831049 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.831055 | orchestrator | 2026-04-11 00:43:57.831062 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-11 00:43:57.831069 | orchestrator | Saturday 11 April 2026 00:43:57 +0000 (0:00:00.123) 0:00:40.143 ******** 2026-04-11 00:43:57.831076 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:43:57.831083 | orchestrator | 2026-04-11 00:43:57.831094 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-11 00:44:01.772567 | orchestrator | Saturday 11 April 2026 00:43:57 +0000 (0:00:00.117) 0:00:40.261 ******** 2026-04-11 00:44:01.772731 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.772751 | orchestrator | 2026-04-11 00:44:01.772766 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-11 00:44:01.772781 | orchestrator | Saturday 11 April 2026 00:43:57 +0000 (0:00:00.121) 0:00:40.383 ******** 2026-04-11 00:44:01.772795 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.772809 | orchestrator | 2026-04-11 00:44:01.772823 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2026-04-11 00:44:01.772893 | orchestrator | Saturday 11 April 2026 00:43:58 +0000 (0:00:00.264) 0:00:40.647 ******** 2026-04-11 00:44:01.772911 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.772925 | orchestrator | 2026-04-11 00:44:01.772939 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-11 00:44:01.772952 | orchestrator | Saturday 11 April 2026 00:43:58 +0000 (0:00:00.127) 0:00:40.775 ******** 2026-04-11 00:44:01.772965 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.772978 | orchestrator | 2026-04-11 00:44:01.772992 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-11 00:44:01.773006 | orchestrator | Saturday 11 April 2026 00:43:58 +0000 (0:00:00.121) 0:00:40.896 ******** 2026-04-11 00:44:01.773020 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.773030 | orchestrator | 2026-04-11 00:44:01.773039 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-11 00:44:01.773047 | orchestrator | Saturday 11 April 2026 00:43:58 +0000 (0:00:00.123) 0:00:41.019 ******** 2026-04-11 00:44:01.773055 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.773063 | orchestrator | 2026-04-11 00:44:01.773072 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-11 00:44:01.773085 | orchestrator | Saturday 11 April 2026 00:43:58 +0000 (0:00:00.127) 0:00:41.147 ******** 2026-04-11 00:44:01.773099 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.773112 | orchestrator | 2026-04-11 00:44:01.773133 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-11 00:44:01.773147 | orchestrator | Saturday 11 April 2026 00:43:58 +0000 (0:00:00.120) 0:00:41.267 ******** 2026-04-11 00:44:01.773161 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.773173 
| orchestrator | 2026-04-11 00:44:01.773186 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-11 00:44:01.773198 | orchestrator | Saturday 11 April 2026 00:43:58 +0000 (0:00:00.122) 0:00:41.390 ******** 2026-04-11 00:44:01.773212 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.773224 | orchestrator | 2026-04-11 00:44:01.773267 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-11 00:44:01.773282 | orchestrator | Saturday 11 April 2026 00:43:59 +0000 (0:00:00.137) 0:00:41.527 ******** 2026-04-11 00:44:01.773295 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.773308 | orchestrator | 2026-04-11 00:44:01.773321 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-11 00:44:01.773334 | orchestrator | Saturday 11 April 2026 00:43:59 +0000 (0:00:00.132) 0:00:41.660 ******** 2026-04-11 00:44:01.773350 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:44:01.773365 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:44:01.773378 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.773392 | orchestrator | 2026-04-11 00:44:01.773405 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-11 00:44:01.773418 | orchestrator | Saturday 11 April 2026 00:43:59 +0000 (0:00:00.145) 0:00:41.806 ******** 2026-04-11 00:44:01.773431 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:44:01.773445 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:44:01.773457 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.773470 | orchestrator | 2026-04-11 00:44:01.773484 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-11 00:44:01.773497 | orchestrator | Saturday 11 April 2026 00:43:59 +0000 (0:00:00.134) 0:00:41.940 ******** 2026-04-11 00:44:01.773562 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:44:01.773577 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:44:01.773590 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.773603 | orchestrator | 2026-04-11 00:44:01.773615 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-11 00:44:01.773627 | orchestrator | Saturday 11 April 2026 00:43:59 +0000 (0:00:00.140) 0:00:42.080 ******** 2026-04-11 00:44:01.773639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:44:01.773651 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:44:01.773664 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.773677 | orchestrator | 2026-04-11 00:44:01.773713 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-11 00:44:01.773727 | orchestrator | Saturday 11 April 2026 00:43:59 +0000 (0:00:00.257) 0:00:42.338 ******** 2026-04-11 
00:44:01.773740 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:44:01.773753 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:44:01.773766 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.773779 | orchestrator | 2026-04-11 00:44:01.773792 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-11 00:44:01.773805 | orchestrator | Saturday 11 April 2026 00:44:00 +0000 (0:00:00.138) 0:00:42.477 ******** 2026-04-11 00:44:01.773819 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:44:01.773911 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:44:01.773927 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.773940 | orchestrator | 2026-04-11 00:44:01.773953 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-11 00:44:01.773967 | orchestrator | Saturday 11 April 2026 00:44:00 +0000 (0:00:00.129) 0:00:42.607 ******** 2026-04-11 00:44:01.773980 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})  2026-04-11 00:44:01.773994 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})  2026-04-11 00:44:01.774007 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:01.774090 | orchestrator | 
2026-04-11 00:44:01.774106 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-11 00:44:01.774120 | orchestrator | Saturday 11 April 2026 00:44:00 +0000 (0:00:00.122) 0:00:42.730 ********
2026-04-11 00:44:01.774135 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})
2026-04-11 00:44:01.774150 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})
2026-04-11 00:44:01.774164 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:44:01.774177 | orchestrator |
2026-04-11 00:44:01.774191 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-11 00:44:01.774205 | orchestrator | Saturday 11 April 2026 00:44:00 +0000 (0:00:00.122) 0:00:42.852 ********
2026-04-11 00:44:01.774217 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:44:01.774231 | orchestrator |
2026-04-11 00:44:01.774243 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-11 00:44:01.774256 | orchestrator | Saturday 11 April 2026 00:44:00 +0000 (0:00:00.454) 0:00:43.307 ********
2026-04-11 00:44:01.774268 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:44:01.774281 | orchestrator |
2026-04-11 00:44:01.774294 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-11 00:44:01.774308 | orchestrator | Saturday 11 April 2026 00:44:01 +0000 (0:00:00.121) 0:00:43.772 ********
2026-04-11 00:44:01.774322 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:44:01.774337 | orchestrator |
2026-04-11 00:44:01.774351 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-11 00:44:01.774366 | orchestrator | Saturday 11 April 2026 00:44:01 +0000 (0:00:00.121) 0:00:43.893 ********
2026-04-11 00:44:01.774381 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'vg_name': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})
2026-04-11 00:44:01.774397 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'vg_name': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})
2026-04-11 00:44:01.774411 | orchestrator |
2026-04-11 00:44:01.774426 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-11 00:44:01.774440 | orchestrator | Saturday 11 April 2026 00:44:01 +0000 (0:00:00.147) 0:00:44.041 ********
2026-04-11 00:44:01.774455 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})
2026-04-11 00:44:01.774470 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})
2026-04-11 00:44:01.774484 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:44:01.774499 | orchestrator |
2026-04-11 00:44:01.774524 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-11 00:44:01.774539 | orchestrator | Saturday 11 April 2026 00:44:01 +0000 (0:00:00.112) 0:00:44.153 ********
2026-04-11 00:44:01.774553 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})
2026-04-11 00:44:01.774580 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})
2026-04-11 00:44:07.146388 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:44:07.146493 | orchestrator |
2026-04-11 00:44:07.146510 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-11 00:44:07.146523 | orchestrator | Saturday 11 April 2026 00:44:01 +0000 (0:00:00.125) 0:00:44.279 ********
2026-04-11 00:44:07.146535 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'})
2026-04-11 00:44:07.146548 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'})
2026-04-11 00:44:07.146559 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:44:07.146570 | orchestrator |
2026-04-11 00:44:07.146582 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-11 00:44:07.146593 | orchestrator | Saturday 11 April 2026 00:44:01 +0000 (0:00:00.131) 0:00:44.410 ********
2026-04-11 00:44:07.146604 | orchestrator | ok: [testbed-node-4] => {
2026-04-11 00:44:07.146615 | orchestrator |     "lvm_report": {
2026-04-11 00:44:07.146627 | orchestrator |         "lv": [
2026-04-11 00:44:07.146638 | orchestrator |             {
2026-04-11 00:44:07.146665 | orchestrator |                 "lv_name": "osd-block-296a8e73-f310-5078-85ef-ed795f213448",
2026-04-11 00:44:07.146678 | orchestrator |                 "vg_name": "ceph-296a8e73-f310-5078-85ef-ed795f213448"
2026-04-11 00:44:07.146689 | orchestrator |             },
2026-04-11 00:44:07.146700 | orchestrator |             {
2026-04-11 00:44:07.146711 | orchestrator |                 "lv_name": "osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1",
2026-04-11 00:44:07.146722 | orchestrator |                 "vg_name": "ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1"
2026-04-11 00:44:07.146733 | orchestrator |             }
2026-04-11 00:44:07.146743 | orchestrator |         ],
2026-04-11 00:44:07.146754 | orchestrator |         "pv": [
2026-04-11 00:44:07.146765 | orchestrator |             {
2026-04-11 00:44:07.146776 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-11 00:44:07.146787 | orchestrator |                 "vg_name": "ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1"
2026-04-11 00:44:07.146798 | orchestrator |             },
2026-04-11 00:44:07.146808 | orchestrator |             {
2026-04-11 00:44:07.146819 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-11 00:44:07.146884 | orchestrator |                 "vg_name": "ceph-296a8e73-f310-5078-85ef-ed795f213448"
2026-04-11 00:44:07.146907 | orchestrator |             }
2026-04-11 00:44:07.146928 | orchestrator |         ]
2026-04-11 00:44:07.146948 | orchestrator |     }
2026-04-11 00:44:07.146963 | orchestrator | }
2026-04-11 00:44:07.146976 | orchestrator |
2026-04-11 00:44:07.146989 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-11 00:44:07.147002 | orchestrator |
2026-04-11 00:44:07.147014 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-11 00:44:07.147027 | orchestrator | Saturday 11 April 2026 00:44:02 +0000 (0:00:00.232) 0:00:44.792 ********
2026-04-11 00:44:07.147039 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-11 00:44:07.147052 | orchestrator |
2026-04-11 00:44:07.147064 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-11 00:44:07.147077 | orchestrator | Saturday 11 April 2026 00:44:02 +0000 (0:00:00.230) 0:00:45.024 ********
2026-04-11 00:44:07.147091 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:44:07.147130 | orchestrator |
2026-04-11 00:44:07.147143 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.147156 | orchestrator | Saturday 11 April 2026 00:44:02 +0000 (0:00:00.413) 0:00:45.255 ********
2026-04-11 00:44:07.147169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-11 00:44:07.147181 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-11 00:44:07.147193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-11 00:44:07.147205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-11 00:44:07.147223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-11 00:44:07.147236 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-11 00:44:07.147249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-11 00:44:07.147262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-11 00:44:07.147274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-11 00:44:07.147287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-11 00:44:07.147300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-11 00:44:07.147313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-11 00:44:07.147325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-11 00:44:07.147336 | orchestrator |
2026-04-11 00:44:07.147347 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.147358 | orchestrator | Saturday 11 April 2026 00:44:03 +0000 (0:00:00.201) 0:00:45.668 ********
2026-04-11 00:44:07.147369 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:07.147380 | orchestrator |
2026-04-11 00:44:07.147390 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.147401 | orchestrator | Saturday 11 April 2026 00:44:03 +0000 (0:00:00.193) 0:00:45.870 ********
2026-04-11 00:44:07.147412 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:07.147423 | orchestrator |
2026-04-11 00:44:07.147434 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.147464 | orchestrator | Saturday 11 April 2026 00:44:03 +0000 (0:00:00.188) 0:00:46.064 ********
2026-04-11 00:44:07.147475 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:07.147486 | orchestrator |
2026-04-11 00:44:07.147497 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.147508 | orchestrator | Saturday 11 April 2026 00:44:03 +0000 (0:00:00.187) 0:00:46.253 ********
2026-04-11 00:44:07.147519 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:07.147530 | orchestrator |
2026-04-11 00:44:07.147541 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.147552 | orchestrator | Saturday 11 April 2026 00:44:04 +0000 (0:00:00.220) 0:00:46.440 ********
2026-04-11 00:44:07.147562 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:07.147573 | orchestrator |
2026-04-11 00:44:07.147584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.147601 | orchestrator | Saturday 11 April 2026 00:44:04 +0000 (0:00:00.441) 0:00:46.660 ********
2026-04-11 00:44:07.147619 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:07.147635 | orchestrator |
2026-04-11 00:44:07.147651 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.147668 | orchestrator | Saturday 11 April 2026 00:44:04 +0000 (0:00:00.171) 0:00:47.102 ********
2026-04-11 00:44:07.147707 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:07.147726 | orchestrator |
2026-04-11 00:44:07.147758 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.147775 | orchestrator | Saturday 11 April 2026 00:44:04 +0000 (0:00:00.178) 0:00:47.273 ********
2026-04-11 00:44:07.147791 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:07.147809 | orchestrator |
2026-04-11 00:44:07.147864 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.147901 | orchestrator | Saturday 11 April 2026 00:44:05 +0000 (0:00:00.366) 0:00:47.452 ********
2026-04-11 00:44:07.147920 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c)
2026-04-11 00:44:07.147941 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c)
2026-04-11 00:44:07.147952 | orchestrator |
2026-04-11 00:44:07.147963 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.147974 | orchestrator | Saturday 11 April 2026 00:44:05 +0000 (0:00:00.385) 0:00:47.818 ********
2026-04-11 00:44:07.147985 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_95d65739-9dbf-4d48-9a83-eea9c175e7b9)
2026-04-11 00:44:07.147996 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_95d65739-9dbf-4d48-9a83-eea9c175e7b9)
2026-04-11 00:44:07.148007 | orchestrator |
2026-04-11 00:44:07.148018 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.148043 | orchestrator | Saturday 11 April 2026 00:44:05 +0000 (0:00:00.379) 0:00:48.204 ********
2026-04-11 00:44:07.148054 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f9459d28-2556-4a8a-b254-3aab23978995)
2026-04-11 00:44:07.148075 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f9459d28-2556-4a8a-b254-3aab23978995)
2026-04-11 00:44:07.148086 | orchestrator |
2026-04-11 00:44:07.148097 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.148108 | orchestrator | Saturday 11 April 2026 00:44:06 +0000 (0:00:00.403) 0:00:48.584 ********
2026-04-11 00:44:07.148118 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a6786cac-aebf-4828-8780-9d987947f514)
2026-04-11 00:44:07.148129 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a6786cac-aebf-4828-8780-9d987947f514)
2026-04-11 00:44:07.148140 | orchestrator |
2026-04-11 00:44:07.148151 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 00:44:07.148162 | orchestrator | Saturday 11 April 2026 00:44:06 +0000 (0:00:00.300) 0:00:48.987 ********
2026-04-11 00:44:07.148173 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-11 00:44:07.148184 | orchestrator |
2026-04-11 00:44:07.148195 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:07.148206 | orchestrator | Saturday 11 April 2026 00:44:06 +0000 (0:00:00.365) 0:00:49.288 ********
2026-04-11 00:44:07.148217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-11 00:44:07.148228 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-11 00:44:07.148239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-11 00:44:07.148250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-11 00:44:07.148261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-11 00:44:07.148271 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-11 00:44:07.148282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-11 00:44:07.148293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-11 00:44:07.148304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-11 00:44:07.148323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-11 00:44:07.148335 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-11 00:44:07.148356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-11 00:44:14.966558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-11 00:44:14.966638 | orchestrator |
2026-04-11 00:44:14.966646 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:14.966651 | orchestrator | Saturday 11 April 2026 00:44:07 +0000 (0:00:00.179) 0:00:49.653 ********
2026-04-11 00:44:14.966656 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.966662 | orchestrator |
2026-04-11 00:44:14.966670 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:14.966678 | orchestrator | Saturday 11 April 2026 00:44:07 +0000 (0:00:00.176) 0:00:49.833 ********
2026-04-11 00:44:14.966686 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.966691 | orchestrator |
2026-04-11 00:44:14.966695 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:14.966700 | orchestrator | Saturday 11 April 2026 00:44:07 +0000 (0:00:00.477) 0:00:50.009 ********
2026-04-11 00:44:14.966704 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.966708 | orchestrator |
2026-04-11 00:44:14.966713 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:14.966718 | orchestrator | Saturday 11 April 2026 00:44:08 +0000 (0:00:00.189) 0:00:50.486 ********
2026-04-11 00:44:14.966722 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.966726 | orchestrator |
2026-04-11 00:44:14.966730 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:14.966734 | orchestrator | Saturday 11 April 2026 00:44:08 +0000 (0:00:00.163) 0:00:50.676 ********
2026-04-11 00:44:14.966739 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.966743 | orchestrator |
2026-04-11 00:44:14.966747 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:14.966751 | orchestrator | Saturday 11 April 2026 00:44:08 +0000 (0:00:00.173) 0:00:50.840 ********
2026-04-11 00:44:14.966755 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.966759 | orchestrator |
2026-04-11 00:44:14.966763 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:14.966767 | orchestrator | Saturday 11 April 2026 00:44:08 +0000 (0:00:00.173) 0:00:51.014 ********
2026-04-11 00:44:14.966772 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.966776 | orchestrator |
2026-04-11 00:44:14.966780 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:14.966797 | orchestrator | Saturday 11 April 2026 00:44:08 +0000 (0:00:00.178) 0:00:51.187 ********
2026-04-11 00:44:14.966802 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.966806 | orchestrator |
2026-04-11 00:44:14.966810 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:14.966814 | orchestrator | Saturday 11 April 2026 00:44:08 +0000 (0:00:00.591) 0:00:51.365 ********
2026-04-11 00:44:14.966818 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-11 00:44:14.966874 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-11 00:44:14.966879 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-11 00:44:14.966884 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-11 00:44:14.966888 | orchestrator |
2026-04-11 00:44:14.966892 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:14.966896 | orchestrator | Saturday 11 April 2026 00:44:09 +0000 (0:00:00.183) 0:00:51.956 ********
2026-04-11 00:44:14.966900 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.966904 | orchestrator |
2026-04-11 00:44:14.966909 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:14.966913 | orchestrator | Saturday 11 April 2026 00:44:09 +0000 (0:00:00.197) 0:00:52.140 ********
2026-04-11 00:44:14.966935 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.966939 | orchestrator |
2026-04-11 00:44:14.966943 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:14.966948 | orchestrator | Saturday 11 April 2026 00:44:09 +0000 (0:00:00.187) 0:00:52.338 ********
2026-04-11 00:44:14.966952 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.966956 | orchestrator |
2026-04-11 00:44:14.966960 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 00:44:14.966964 | orchestrator | Saturday 11 April 2026 00:44:10 +0000 (0:00:00.172) 0:00:52.525 ********
2026-04-11 00:44:14.966968 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.966972 | orchestrator |
2026-04-11 00:44:14.966976 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-11 00:44:14.966980 | orchestrator | Saturday 11 April 2026 00:44:10 +0000 (0:00:00.145) 0:00:52.697 ********
2026-04-11 00:44:14.966985 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.966989 | orchestrator |
2026-04-11 00:44:14.966993 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-11 00:44:14.966997 | orchestrator | Saturday 11 April 2026 00:44:10 +0000 (0:00:00.287) 0:00:52.843 ********
2026-04-11 00:44:14.967001 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b3fdd62a-f314-5cc9-917d-dd1b13b97b05'}})
2026-04-11 00:44:14.967006 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d143ac2-4627-58f1-a5c6-72fac7fe3e62'}})
2026-04-11 00:44:14.967010 | orchestrator |
2026-04-11 00:44:14.967014 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-11 00:44:14.967018 | orchestrator | Saturday 11 April 2026 00:44:10 +0000 (0:00:01.822) 0:00:53.130 ********
2026-04-11 00:44:14.967023 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})
2026-04-11 00:44:14.967028 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})
2026-04-11 00:44:14.967032 | orchestrator |
2026-04-11 00:44:14.967036 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-11 00:44:14.967051 | orchestrator | Saturday 11 April 2026 00:44:12 +0000 (0:00:00.151) 0:00:54.953 ********
2026-04-11 00:44:14.967056 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})
2026-04-11 00:44:14.967061 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})
2026-04-11 00:44:14.967065 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.967069 | orchestrator |
2026-04-11 00:44:14.967073 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-11 00:44:14.967077 | orchestrator | Saturday 11 April 2026 00:44:12 +0000 (0:00:01.249) 0:00:55.104 ********
2026-04-11 00:44:14.967082 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})
2026-04-11 00:44:14.967089 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})
2026-04-11 00:44:14.967093 | orchestrator |
2026-04-11 00:44:14.967098 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-11 00:44:14.967102 | orchestrator | Saturday 11 April 2026 00:44:13 +0000 (0:00:00.137) 0:00:56.353 ********
2026-04-11 00:44:14.967106 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})
2026-04-11 00:44:14.967110 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})
2026-04-11 00:44:14.967118 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.967122 | orchestrator |
2026-04-11 00:44:14.967127 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-11 00:44:14.967131 | orchestrator | Saturday 11 April 2026 00:44:14 +0000 (0:00:00.125) 0:00:56.490 ********
2026-04-11 00:44:14.967135 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.967141 | orchestrator |
2026-04-11 00:44:14.967148 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-11 00:44:14.967155 | orchestrator | Saturday 11 April 2026 00:44:14 +0000 (0:00:00.137) 0:00:56.616 ********
2026-04-11 00:44:14.967161 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})
2026-04-11 00:44:14.967167 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})
2026-04-11 00:44:14.967174 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.967180 | orchestrator |
2026-04-11 00:44:14.967186 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-11 00:44:14.967193 | orchestrator | Saturday 11 April 2026 00:44:14 +0000 (0:00:00.119) 0:00:56.753 ********
2026-04-11 00:44:14.967199 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.967206 | orchestrator |
2026-04-11 00:44:14.967213 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-11 00:44:14.967220 | orchestrator | Saturday 11 April 2026 00:44:14 +0000 (0:00:00.115) 0:00:56.872 ********
2026-04-11 00:44:14.967226 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})
2026-04-11 00:44:14.967234 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})
2026-04-11 00:44:14.967239 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.967246 | orchestrator |
2026-04-11 00:44:14.967252 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-11 00:44:14.967259 | orchestrator | Saturday 11 April 2026 00:44:14 +0000 (0:00:00.129) 0:00:56.988 ********
2026-04-11 00:44:14.967265 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.967272 | orchestrator |
2026-04-11 00:44:14.967277 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-11 00:44:14.967284 | orchestrator | Saturday 11 April 2026 00:44:14 +0000 (0:00:00.121) 0:00:57.118 ********
2026-04-11 00:44:14.967291 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})
2026-04-11 00:44:14.967297 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})
2026-04-11 00:44:14.967304 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:14.967311 | orchestrator |
2026-04-11 00:44:14.967318 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-11 00:44:14.967325 | orchestrator | Saturday 11 April 2026 00:44:14 +0000 (0:00:00.100) 0:00:57.240 ********
2026-04-11 00:44:14.967332 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:44:14.967338 | orchestrator |
2026-04-11 00:44:14.967342 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-11 00:44:14.967346 | orchestrator | Saturday 11 April 2026 00:44:14 +0000 (0:00:00.269) 0:00:57.340 ********
2026-04-11 00:44:14.967356 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})
2026-04-11 00:44:20.870186 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})
2026-04-11 00:44:20.870313 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.870329 | orchestrator |
2026-04-11 00:44:20.870340 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-11 00:44:20.870352 | orchestrator | Saturday 11 April 2026 00:44:15 +0000 (0:00:00.153) 0:00:57.610 ********
2026-04-11 00:44:20.870363 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})
2026-04-11 00:44:20.870373 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})
2026-04-11 00:44:20.870383 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.870392 | orchestrator |
2026-04-11 00:44:20.870416 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-11 00:44:20.870426 | orchestrator | Saturday 11 April 2026 00:44:15 +0000 (0:00:00.178) 0:00:57.764 ********
2026-04-11 00:44:20.870436 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})
2026-04-11 00:44:20.870446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})
2026-04-11 00:44:20.870455 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.870465 | orchestrator |
2026-04-11 00:44:20.870475 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-11 00:44:20.870484 | orchestrator | Saturday 11 April 2026 00:44:15 +0000 (0:00:00.130) 0:00:57.942 ********
2026-04-11 00:44:20.870494 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.870504 | orchestrator |
2026-04-11 00:44:20.870513 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-11 00:44:20.870523 | orchestrator | Saturday 11 April 2026 00:44:15 +0000 (0:00:00.130) 0:00:58.073 ********
2026-04-11 00:44:20.870532 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.870542 | orchestrator |
2026-04-11 00:44:20.870552 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-11 00:44:20.870561 | orchestrator | Saturday 11 April 2026 00:44:15 +0000 (0:00:00.137) 0:00:58.203 ********
2026-04-11 00:44:20.870571 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.870580 | orchestrator |
2026-04-11 00:44:20.870591 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-11 00:44:20.870601 | orchestrator | Saturday 11 April 2026 00:44:15 +0000 (0:00:00.130) 0:00:58.341 ********
2026-04-11 00:44:20.870611 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 00:44:20.870622 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-11 00:44:20.870631 | orchestrator | }
2026-04-11 00:44:20.870642 | orchestrator |
2026-04-11 00:44:20.870651 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-11 00:44:20.870661 | orchestrator | Saturday 11 April 2026 00:44:16 +0000 (0:00:00.138) 0:00:58.471 ********
2026-04-11 00:44:20.870673 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 00:44:20.870685 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-11 00:44:20.870696 | orchestrator | }
2026-04-11 00:44:20.870707 | orchestrator |
2026-04-11 00:44:20.870718 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-11 00:44:20.870730 | orchestrator | Saturday 11 April 2026 00:44:16 +0000 (0:00:00.143) 0:00:58.609 ********
2026-04-11 00:44:20.870740 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 00:44:20.870751 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-11 00:44:20.870762 | orchestrator | }
2026-04-11 00:44:20.870772 | orchestrator |
2026-04-11 00:44:20.870783 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-11 00:44:20.870794 | orchestrator | Saturday 11 April 2026 00:44:16 +0000 (0:00:00.570) 0:00:58.753 ********
2026-04-11 00:44:20.870840 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:44:20.870852 | orchestrator |
2026-04-11 00:44:20.870864 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-11 00:44:20.870874 | orchestrator | Saturday 11 April 2026 00:44:16 +0000 (0:00:00.479) 0:00:59.324 ********
2026-04-11 00:44:20.870886 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:44:20.870897 | orchestrator |
2026-04-11 00:44:20.870909 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-11 00:44:20.870919 | orchestrator | Saturday 11 April 2026 00:44:17 +0000 (0:00:00.494) 0:00:59.804 ********
2026-04-11 00:44:20.870928 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:44:20.870938 | orchestrator |
2026-04-11 00:44:20.870947 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-11 00:44:20.870957 | orchestrator | Saturday 11 April 2026 00:44:17 +0000 (0:00:00.323) 0:01:00.298 ********
2026-04-11 00:44:20.870967 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:44:20.870976 | orchestrator |
2026-04-11 00:44:20.870986 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-11 00:44:20.870995 | orchestrator | Saturday 11 April 2026 00:44:18 +0000 (0:00:00.116) 0:01:00.621 ********
2026-04-11 00:44:20.871005 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871014 | orchestrator |
2026-04-11 00:44:20.871024 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-11 00:44:20.871034 | orchestrator | Saturday 11 April 2026 00:44:18 +0000 (0:00:00.110) 0:01:00.737 ********
2026-04-11 00:44:20.871043 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871053 | orchestrator |
2026-04-11 00:44:20.871063 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-11 00:44:20.871072 | orchestrator | Saturday 11 April 2026 00:44:18 +0000 (0:00:00.171) 0:01:00.848 ********
2026-04-11 00:44:20.871082 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 00:44:20.871092 | orchestrator |     "vgs_report": {
2026-04-11 00:44:20.871103 | orchestrator |         "vg": []
2026-04-11 00:44:20.871131 | orchestrator |     }
2026-04-11 00:44:20.871142 | orchestrator | }
2026-04-11 00:44:20.871152 | orchestrator |
2026-04-11 00:44:20.871161 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-11 00:44:20.871171 | orchestrator | Saturday 11 April 2026 00:44:18 +0000 (0:00:00.129) 0:01:01.019 ********
2026-04-11 00:44:20.871181 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871191 | orchestrator |
2026-04-11 00:44:20.871201 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-11 00:44:20.871210 | orchestrator | Saturday 11 April 2026 00:44:18 +0000 (0:00:00.131) 0:01:01.149 ********
2026-04-11 00:44:20.871220 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871230 | orchestrator |
2026-04-11 00:44:20.871239 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-11 00:44:20.871249 | orchestrator | Saturday 11 April 2026 00:44:18 +0000 (0:00:00.123) 0:01:01.281 ********
2026-04-11 00:44:20.871259 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871268 | orchestrator |
2026-04-11 00:44:20.871278 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-11 00:44:20.871287 | orchestrator | Saturday 11 April 2026 00:44:18 +0000 (0:00:00.137) 0:01:01.404 ********
2026-04-11 00:44:20.871303 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871313 | orchestrator |
2026-04-11 00:44:20.871323 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-11 00:44:20.871332 | orchestrator | Saturday 11 April 2026 00:44:19 +0000 (0:00:00.121) 0:01:01.542 ********
2026-04-11 00:44:20.871342 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871352 | orchestrator |
2026-04-11 00:44:20.871362 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-11 00:44:20.871372 | orchestrator | Saturday 11 April 2026 00:44:19 +0000 (0:00:00.124) 0:01:01.663 ********
2026-04-11 00:44:20.871389 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871405 | orchestrator |
2026-04-11 00:44:20.871431 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-11 00:44:20.871448 | orchestrator | Saturday 11 April 2026 00:44:19 +0000 (0:00:00.131) 0:01:01.788 ********
2026-04-11 00:44:20.871464 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871480 | orchestrator |
2026-04-11 00:44:20.871497 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-11 00:44:20.871514 | orchestrator | Saturday 11 April 2026 00:44:19 +0000 (0:00:00.115) 0:01:01.920 ********
2026-04-11 00:44:20.871531 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871542 | orchestrator |
2026-04-11 00:44:20.871552 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-11 00:44:20.871562 | orchestrator | Saturday 11 April 2026 00:44:19 +0000 (0:00:00.285) 0:01:02.036 ********
2026-04-11 00:44:20.871572 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871581 | orchestrator |
2026-04-11 00:44:20.871591 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-11 00:44:20.871601 | orchestrator | Saturday 11 April 2026 00:44:19 +0000 (0:00:00.134) 0:01:02.321 ********
2026-04-11 00:44:20.871611 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871620 | orchestrator |
2026-04-11 00:44:20.871630 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-11 00:44:20.871640 | orchestrator | Saturday 11 April 2026 00:44:20 +0000 (0:00:00.131) 0:01:02.455 ********
2026-04-11 00:44:20.871649 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871659 | orchestrator |
2026-04-11 00:44:20.871668 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-11 00:44:20.871678 | orchestrator | Saturday 11 April 2026 00:44:20 +0000 (0:00:00.123) 0:01:02.587 ********
2026-04-11 00:44:20.871688 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871697 | orchestrator |
2026-04-11 00:44:20.871707 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-11 00:44:20.871717 | orchestrator | Saturday 11 April 2026 00:44:20 +0000 (0:00:00.123) 0:01:02.710 ********
2026-04-11 00:44:20.871738 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871748 | orchestrator |
2026-04-11 00:44:20.871758 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-11 00:44:20.871768 | orchestrator | Saturday 11 April 2026 00:44:20 +0000 (0:00:00.125) 0:01:02.836 ********
2026-04-11 00:44:20.871777 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:44:20.871787 | orchestrator |
2026-04-11 00:44:20.871797 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-11 00:44:20.871807 |
orchestrator | Saturday 11 April 2026 00:44:20 +0000 (0:00:00.120) 0:01:02.957 ******** 2026-04-11 00:44:20.871837 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})  2026-04-11 00:44:20.871848 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})  2026-04-11 00:44:20.871858 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:44:20.871868 | orchestrator | 2026-04-11 00:44:20.871877 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-11 00:44:20.871887 | orchestrator | Saturday 11 April 2026 00:44:20 +0000 (0:00:00.138) 0:01:03.095 ******** 2026-04-11 00:44:20.871897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})  2026-04-11 00:44:20.871906 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})  2026-04-11 00:44:20.871916 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:44:20.871926 | orchestrator | 2026-04-11 00:44:20.871935 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-11 00:44:20.871945 | orchestrator | Saturday 11 April 2026 00:44:20 +0000 (0:00:00.136) 0:01:03.232 ******** 2026-04-11 00:44:20.871972 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})  2026-04-11 00:44:23.838155 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})  2026-04-11 
00:44:23.838238 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:44:23.838251 | orchestrator | 2026-04-11 00:44:23.838259 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-11 00:44:23.838269 | orchestrator | Saturday 11 April 2026 00:44:20 +0000 (0:00:00.143) 0:01:03.375 ******** 2026-04-11 00:44:23.838276 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})  2026-04-11 00:44:23.838285 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})  2026-04-11 00:44:23.838292 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:44:23.838299 | orchestrator | 2026-04-11 00:44:23.838307 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-11 00:44:23.838314 | orchestrator | Saturday 11 April 2026 00:44:21 +0000 (0:00:00.134) 0:01:03.509 ******** 2026-04-11 00:44:23.838322 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})  2026-04-11 00:44:23.838329 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})  2026-04-11 00:44:23.838336 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:44:23.838343 | orchestrator | 2026-04-11 00:44:23.838351 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-11 00:44:23.838359 | orchestrator | Saturday 11 April 2026 00:44:21 +0000 (0:00:00.150) 0:01:03.660 ******** 2026-04-11 00:44:23.838366 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 
'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})  2026-04-11 00:44:23.838375 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})  2026-04-11 00:44:23.838407 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:44:23.838422 | orchestrator | 2026-04-11 00:44:23.838435 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-11 00:44:23.838447 | orchestrator | Saturday 11 April 2026 00:44:21 +0000 (0:00:00.142) 0:01:03.803 ******** 2026-04-11 00:44:23.838460 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})  2026-04-11 00:44:23.838474 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})  2026-04-11 00:44:23.838486 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:44:23.838499 | orchestrator | 2026-04-11 00:44:23.838512 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-11 00:44:23.838524 | orchestrator | Saturday 11 April 2026 00:44:21 +0000 (0:00:00.350) 0:01:04.153 ******** 2026-04-11 00:44:23.838532 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})  2026-04-11 00:44:23.838539 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})  2026-04-11 00:44:23.838546 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:44:23.838554 | orchestrator | 2026-04-11 00:44:23.838561 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-11 
00:44:23.838586 | orchestrator | Saturday 11 April 2026 00:44:21 +0000 (0:00:00.138) 0:01:04.291 ******** 2026-04-11 00:44:23.838595 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:44:23.838608 | orchestrator | 2026-04-11 00:44:23.838620 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-11 00:44:23.838631 | orchestrator | Saturday 11 April 2026 00:44:22 +0000 (0:00:00.522) 0:01:04.814 ******** 2026-04-11 00:44:23.838643 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:44:23.838654 | orchestrator | 2026-04-11 00:44:23.838666 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-11 00:44:23.838679 | orchestrator | Saturday 11 April 2026 00:44:22 +0000 (0:00:00.495) 0:01:05.310 ******** 2026-04-11 00:44:23.838691 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:44:23.838703 | orchestrator | 2026-04-11 00:44:23.838714 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-11 00:44:23.838722 | orchestrator | Saturday 11 April 2026 00:44:23 +0000 (0:00:00.155) 0:01:05.466 ******** 2026-04-11 00:44:23.838732 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'vg_name': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'}) 2026-04-11 00:44:23.838746 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'vg_name': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'}) 2026-04-11 00:44:23.838759 | orchestrator | 2026-04-11 00:44:23.838772 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-11 00:44:23.838785 | orchestrator | Saturday 11 April 2026 00:44:23 +0000 (0:00:00.172) 0:01:05.638 ******** 2026-04-11 00:44:23.838844 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 
'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})  2026-04-11 00:44:23.838854 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})  2026-04-11 00:44:23.838862 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:44:23.838869 | orchestrator | 2026-04-11 00:44:23.838877 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-11 00:44:23.838884 | orchestrator | Saturday 11 April 2026 00:44:23 +0000 (0:00:00.155) 0:01:05.794 ******** 2026-04-11 00:44:23.838897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})  2026-04-11 00:44:23.838905 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})  2026-04-11 00:44:23.838912 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:44:23.838919 | orchestrator | 2026-04-11 00:44:23.838927 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-11 00:44:23.838934 | orchestrator | Saturday 11 April 2026 00:44:23 +0000 (0:00:00.184) 0:01:05.979 ******** 2026-04-11 00:44:23.838941 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'})  2026-04-11 00:44:23.838948 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'})  2026-04-11 00:44:23.838956 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:44:23.838963 | orchestrator | 2026-04-11 00:44:23.838970 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-11 
00:44:23.838977 | orchestrator | Saturday 11 April 2026 00:44:23 +0000 (0:00:00.142) 0:01:06.122 ******** 2026-04-11 00:44:23.838985 | orchestrator | ok: [testbed-node-5] => { 2026-04-11 00:44:23.838997 | orchestrator |  "lvm_report": { 2026-04-11 00:44:23.839009 | orchestrator |  "lv": [ 2026-04-11 00:44:23.839023 | orchestrator |  { 2026-04-11 00:44:23.839045 | orchestrator |  "lv_name": "osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62", 2026-04-11 00:44:23.839054 | orchestrator |  "vg_name": "ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62" 2026-04-11 00:44:23.839062 | orchestrator |  }, 2026-04-11 00:44:23.839069 | orchestrator |  { 2026-04-11 00:44:23.839076 | orchestrator |  "lv_name": "osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05", 2026-04-11 00:44:23.839083 | orchestrator |  "vg_name": "ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05" 2026-04-11 00:44:23.839090 | orchestrator |  } 2026-04-11 00:44:23.839098 | orchestrator |  ], 2026-04-11 00:44:23.839105 | orchestrator |  "pv": [ 2026-04-11 00:44:23.839112 | orchestrator |  { 2026-04-11 00:44:23.839119 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-11 00:44:23.839126 | orchestrator |  "vg_name": "ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05" 2026-04-11 00:44:23.839133 | orchestrator |  }, 2026-04-11 00:44:23.839141 | orchestrator |  { 2026-04-11 00:44:23.839150 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-11 00:44:23.839161 | orchestrator |  "vg_name": "ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62" 2026-04-11 00:44:23.839173 | orchestrator |  } 2026-04-11 00:44:23.839185 | orchestrator |  ] 2026-04-11 00:44:23.839197 | orchestrator |  } 2026-04-11 00:44:23.839210 | orchestrator | } 2026-04-11 00:44:23.839221 | orchestrator | 2026-04-11 00:44:23.839231 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:44:23.839239 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-11 00:44:23.839246 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-11 00:44:23.839253 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-11 00:44:23.839261 | orchestrator | 2026-04-11 00:44:23.839268 | orchestrator | 2026-04-11 00:44:23.839275 | orchestrator | 2026-04-11 00:44:23.839283 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:44:23.839290 | orchestrator | Saturday 11 April 2026 00:44:23 +0000 (0:00:00.130) 0:01:06.252 ******** 2026-04-11 00:44:23.839297 | orchestrator | =============================================================================== 2026-04-11 00:44:23.839304 | orchestrator | Create block VGs -------------------------------------------------------- 5.57s 2026-04-11 00:44:23.839311 | orchestrator | Create block LVs -------------------------------------------------------- 3.99s 2026-04-11 00:44:23.839318 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.85s 2026-04-11 00:44:23.839327 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.50s 2026-04-11 00:44:23.839339 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.47s 2026-04-11 00:44:23.839350 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.46s 2026-04-11 00:44:23.839362 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.45s 2026-04-11 00:44:23.839375 | orchestrator | Add known partitions to the list of available block devices ------------- 1.26s 2026-04-11 00:44:23.839395 | orchestrator | Add known links to the list of available block devices ------------------ 1.12s 2026-04-11 00:44:24.170316 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s 2026-04-11 
00:44:24.170407 | orchestrator | Print LVM report data --------------------------------------------------- 0.84s 2026-04-11 00:44:24.170419 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-04-11 00:44:24.170428 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-04-11 00:44:24.170437 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.69s 2026-04-11 00:44:24.170470 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.66s 2026-04-11 00:44:24.170480 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.64s 2026-04-11 00:44:24.170488 | orchestrator | Get initial list of available block devices ----------------------------- 0.64s 2026-04-11 00:44:24.170510 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.63s 2026-04-11 00:44:24.170519 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.62s 2026-04-11 00:44:24.170528 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.62s 2026-04-11 00:44:35.743131 | orchestrator | 2026-04-11 00:44:35 | INFO  | Prepare task for execution of facts. 2026-04-11 00:44:35.824059 | orchestrator | 2026-04-11 00:44:35 | INFO  | Task 94a21e11-0228-4c68-853f-13688a2ea0c0 (facts) was prepared for execution. 2026-04-11 00:44:35.824140 | orchestrator | 2026-04-11 00:44:35 | INFO  | It takes a moment until task 94a21e11-0228-4c68-853f-13688a2ea0c0 (facts) has been started and output is visible here. 
2026-04-11 00:44:47.777769 | orchestrator | 2026-04-11 00:44:47.777925 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-11 00:44:47.777944 | orchestrator | 2026-04-11 00:44:47.777958 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-11 00:44:47.777971 | orchestrator | Saturday 11 April 2026 00:44:39 +0000 (0:00:00.341) 0:00:00.341 ******** 2026-04-11 00:44:47.777984 | orchestrator | ok: [testbed-manager] 2026-04-11 00:44:47.777996 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:44:47.778008 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:44:47.778085 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:44:47.778109 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:44:47.778120 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:44:47.778132 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:44:47.778142 | orchestrator | 2026-04-11 00:44:47.778155 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-11 00:44:47.778167 | orchestrator | Saturday 11 April 2026 00:44:40 +0000 (0:00:01.317) 0:00:01.658 ******** 2026-04-11 00:44:47.778179 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:44:47.778192 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:44:47.778205 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:44:47.778217 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:44:47.778228 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:44:47.778239 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:47.778250 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:44:47.778262 | orchestrator | 2026-04-11 00:44:47.778274 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-11 00:44:47.778286 | orchestrator | 2026-04-11 00:44:47.778297 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-11 00:44:47.778309 | orchestrator | Saturday 11 April 2026 00:44:41 +0000 (0:00:01.186) 0:00:02.845 ******** 2026-04-11 00:44:47.778323 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:44:47.778336 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:44:47.778350 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:44:47.778364 | orchestrator | ok: [testbed-manager] 2026-04-11 00:44:47.778375 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:44:47.778388 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:44:47.778399 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:44:47.778411 | orchestrator | 2026-04-11 00:44:47.778424 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-11 00:44:47.778437 | orchestrator | 2026-04-11 00:44:47.778449 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-11 00:44:47.778461 | orchestrator | Saturday 11 April 2026 00:44:47 +0000 (0:00:05.505) 0:00:08.350 ******** 2026-04-11 00:44:47.778473 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:44:47.778485 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:44:47.778497 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:44:47.778537 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:44:47.778551 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:44:47.778565 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:44:47.778578 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:44:47.778592 | orchestrator | 2026-04-11 00:44:47.778605 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:44:47.778619 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:44:47.778633 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-11 00:44:47.778645 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:44:47.778657 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:44:47.778670 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:44:47.778682 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:44:47.778693 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 00:44:47.778705 | orchestrator | 2026-04-11 00:44:47.778718 | orchestrator | 2026-04-11 00:44:47.778730 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:44:47.778743 | orchestrator | Saturday 11 April 2026 00:44:47 +0000 (0:00:00.445) 0:00:08.795 ******** 2026-04-11 00:44:47.778755 | orchestrator | =============================================================================== 2026-04-11 00:44:47.778767 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.51s 2026-04-11 00:44:47.778779 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.32s 2026-04-11 00:44:47.778836 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.19s 2026-04-11 00:44:47.778849 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-04-11 00:44:59.083388 | orchestrator | 2026-04-11 00:44:59 | INFO  | Prepare task for execution of frr. 2026-04-11 00:44:59.152729 | orchestrator | 2026-04-11 00:44:59 | INFO  | Task 20f98f38-ce00-4bdb-bda6-b2c306c38ce7 (frr) was prepared for execution. 
2026-04-11 00:44:59.152863 | orchestrator | 2026-04-11 00:44:59 | INFO  | It takes a moment until task 20f98f38-ce00-4bdb-bda6-b2c306c38ce7 (frr) has been started and output is visible here. 2026-04-11 00:45:21.502250 | orchestrator | 2026-04-11 00:45:21.502356 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-11 00:45:21.502372 | orchestrator | 2026-04-11 00:45:21.502383 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-11 00:45:21.502394 | orchestrator | Saturday 11 April 2026 00:45:02 +0000 (0:00:00.267) 0:00:00.267 ******** 2026-04-11 00:45:21.502404 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-11 00:45:21.502416 | orchestrator | 2026-04-11 00:45:21.502426 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-11 00:45:21.502435 | orchestrator | Saturday 11 April 2026 00:45:02 +0000 (0:00:00.196) 0:00:00.464 ******** 2026-04-11 00:45:21.502445 | orchestrator | changed: [testbed-manager] 2026-04-11 00:45:21.502456 | orchestrator | 2026-04-11 00:45:21.502465 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-11 00:45:21.502475 | orchestrator | Saturday 11 April 2026 00:45:03 +0000 (0:00:01.386) 0:00:01.851 ******** 2026-04-11 00:45:21.502507 | orchestrator | changed: [testbed-manager] 2026-04-11 00:45:21.502518 | orchestrator | 2026-04-11 00:45:21.502527 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-11 00:45:21.502537 | orchestrator | Saturday 11 April 2026 00:45:12 +0000 (0:00:08.379) 0:00:10.230 ******** 2026-04-11 00:45:21.502546 | orchestrator | ok: [testbed-manager] 2026-04-11 00:45:21.502557 | orchestrator | 2026-04-11 00:45:21.502566 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-11 00:45:21.502577 | orchestrator | Saturday 11 April 2026 00:45:12 +0000 (0:00:00.910) 0:00:11.141 ******** 2026-04-11 00:45:21.502587 | orchestrator | changed: [testbed-manager] 2026-04-11 00:45:21.502596 | orchestrator | 2026-04-11 00:45:21.502606 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-11 00:45:21.502616 | orchestrator | Saturday 11 April 2026 00:45:13 +0000 (0:00:00.879) 0:00:12.021 ******** 2026-04-11 00:45:21.502625 | orchestrator | ok: [testbed-manager] 2026-04-11 00:45:21.502635 | orchestrator | 2026-04-11 00:45:21.502644 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-11 00:45:21.502654 | orchestrator | Saturday 11 April 2026 00:45:14 +0000 (0:00:01.098) 0:00:13.120 ******** 2026-04-11 00:45:21.502663 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:45:21.502673 | orchestrator | 2026-04-11 00:45:21.502683 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-11 00:45:21.502692 | orchestrator | Saturday 11 April 2026 00:45:15 +0000 (0:00:00.142) 0:00:13.262 ******** 2026-04-11 00:45:21.502702 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:45:21.502711 | orchestrator | 2026-04-11 00:45:21.502721 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-11 00:45:21.502731 | orchestrator | Saturday 11 April 2026 00:45:15 +0000 (0:00:00.250) 0:00:13.512 ******** 2026-04-11 00:45:21.502740 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:45:21.502773 | orchestrator | 2026-04-11 00:45:21.502783 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-11 00:45:21.502795 | orchestrator | Saturday 11 April 2026 00:45:15 +0000 (0:00:00.149) 0:00:13.662 ******** 2026-04-11 
00:45:21.502806 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:45:21.502816 | orchestrator | 2026-04-11 00:45:21.502827 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-11 00:45:21.502838 | orchestrator | Saturday 11 April 2026 00:45:15 +0000 (0:00:00.129) 0:00:13.791 ******** 2026-04-11 00:45:21.502849 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:45:21.502859 | orchestrator | 2026-04-11 00:45:21.502879 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-11 00:45:21.502890 | orchestrator | Saturday 11 April 2026 00:45:15 +0000 (0:00:00.143) 0:00:13.934 ******** 2026-04-11 00:45:21.502901 | orchestrator | changed: [testbed-manager] 2026-04-11 00:45:21.502911 | orchestrator | 2026-04-11 00:45:21.502931 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-11 00:45:21.502942 | orchestrator | Saturday 11 April 2026 00:45:16 +0000 (0:00:00.940) 0:00:14.875 ******** 2026-04-11 00:45:21.502954 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-11 00:45:21.502965 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-11 00:45:21.502977 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-11 00:45:21.502988 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-11 00:45:21.502999 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-11 00:45:21.503010 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-11 00:45:21.503021 | orchestrator | 2026-04-11 00:45:21.503032 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-11 00:45:21.503050 | orchestrator | Saturday 11 April 2026 00:45:18 +0000 (0:00:02.118) 0:00:16.994 ******** 2026-04-11 00:45:21.503061 | orchestrator | ok: [testbed-manager] 2026-04-11 00:45:21.503072 | orchestrator | 2026-04-11 00:45:21.503083 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-11 00:45:21.503094 | orchestrator | Saturday 11 April 2026 00:45:19 +0000 (0:00:01.089) 0:00:18.083 ******** 2026-04-11 00:45:21.503105 | orchestrator | changed: [testbed-manager] 2026-04-11 00:45:21.503116 | orchestrator | 2026-04-11 00:45:21.503127 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:45:21.503137 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-11 00:45:21.503147 | orchestrator | 2026-04-11 00:45:21.503157 | orchestrator | 2026-04-11 00:45:21.503183 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:45:21.503194 | orchestrator | Saturday 11 April 2026 00:45:21 +0000 (0:00:01.321) 0:00:19.404 ******** 2026-04-11 00:45:21.503204 | orchestrator | =============================================================================== 2026-04-11 00:45:21.503213 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.38s 2026-04-11 00:45:21.503223 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.12s 2026-04-11 00:45:21.503233 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.39s 2026-04-11 00:45:21.503242 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.32s 2026-04-11 00:45:21.503252 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.10s 
2026-04-11 00:45:21.503262 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.09s
2026-04-11 00:45:21.503271 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.94s
2026-04-11 00:45:21.503281 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.91s
2026-04-11 00:45:21.503290 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.88s
2026-04-11 00:45:21.503300 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.25s
2026-04-11 00:45:21.503310 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s
2026-04-11 00:45:21.503319 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s
2026-04-11 00:45:21.503329 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s
2026-04-11 00:45:21.503339 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.14s
2026-04-11 00:45:21.503348 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s
2026-04-11 00:45:21.615036 | orchestrator |
2026-04-11 00:45:21.618768 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Apr 11 00:45:21 UTC 2026
2026-04-11 00:45:21.618831 | orchestrator |
2026-04-11 00:45:22.630843 | orchestrator | 2026-04-11 00:45:22 | INFO  | Collection nutshell is prepared for execution
2026-04-11 00:45:22.731602 | orchestrator | 2026-04-11 00:45:22 | INFO  | A [0] - dotfiles
2026-04-11 00:45:32.790931 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [0] - homer
2026-04-11 00:45:32.791127 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [0] - netdata
2026-04-11 00:45:32.791148 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [0] - openstackclient
2026-04-11 00:45:32.791160 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [0] - phpmyadmin
2026-04-11 00:45:32.791183 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [0] - common
2026-04-11 00:45:32.795418 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [1] -- loadbalancer
2026-04-11 00:45:32.795471 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [2] --- opensearch
2026-04-11 00:45:32.795483 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [2] --- mariadb-ng
2026-04-11 00:45:32.795765 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [3] ---- horizon
2026-04-11 00:45:32.795788 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [3] ---- keystone
2026-04-11 00:45:32.796136 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [4] ----- neutron
2026-04-11 00:45:32.796444 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [5] ------ wait-for-nova
2026-04-11 00:45:32.796467 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [6] ------- octavia
2026-04-11 00:45:32.797975 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [4] ----- barbican
2026-04-11 00:45:32.798064 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [4] ----- designate
2026-04-11 00:45:32.798166 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [4] ----- ironic
2026-04-11 00:45:32.798207 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [4] ----- placement
2026-04-11 00:45:32.798219 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [4] ----- magnum
2026-04-11 00:45:32.799674 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [1] -- openvswitch
2026-04-11 00:45:32.800019 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [2] --- ovn
2026-04-11 00:45:32.800442 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [1] -- memcached
2026-04-11 00:45:32.800645 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [1] -- redis
2026-04-11 00:45:32.800927 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [1] -- rabbitmq-ng
2026-04-11 00:45:32.801327 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [0] - kubernetes
2026-04-11 00:45:32.803784 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [1] -- kubeconfig
2026-04-11 00:45:32.804283 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [1] -- copy-kubeconfig
2026-04-11 00:45:32.804591 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [0] - ceph
2026-04-11 00:45:32.807972 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [1] -- ceph-pools
2026-04-11 00:45:32.808014 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [2] --- copy-ceph-keys
2026-04-11 00:45:32.808379 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [3] ---- cephclient
2026-04-11 00:45:32.808473 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-04-11 00:45:32.808594 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [4] ----- wait-for-keystone
2026-04-11 00:45:32.809156 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [5] ------ kolla-ceph-rgw
2026-04-11 00:45:32.809374 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [5] ------ glance
2026-04-11 00:45:32.809613 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [5] ------ cinder
2026-04-11 00:45:32.809774 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [5] ------ nova
2026-04-11 00:45:32.810833 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [4] ----- prometheus
2026-04-11 00:45:32.810868 | orchestrator | 2026-04-11 00:45:32 | INFO  | A [5] ------ grafana
2026-04-11 00:45:32.996681 | orchestrator | 2026-04-11 00:45:32 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-04-11 00:45:32.996821 | orchestrator | 2026-04-11 00:45:32 | INFO  | Tasks are running in the background
2026-04-11 00:45:34.861509 | orchestrator | 2026-04-11 00:45:34 | INFO  | No task IDs specified, wait for all currently running tasks
2026-04-11 00:45:37.063689 | orchestrator | 2026-04-11 00:45:37 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED
2026-04-11 00:45:37.063882 | orchestrator | 2026-04-11 00:45:37 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:45:37.064674 | orchestrator | 2026-04-11 00:45:37 | INFO
 | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:45:37.066464 | orchestrator | 2026-04-11 00:45:37 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED 2026-04-11 00:45:37.067080 | orchestrator | 2026-04-11 00:45:37 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED 2026-04-11 00:45:37.070616 | orchestrator | 2026-04-11 00:45:37 | INFO  | Task 664c9c40-3258-4c61-8fc0-cbe42bd33372 is in state STARTED 2026-04-11 00:45:37.071415 | orchestrator | 2026-04-11 00:45:37 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:45:37.071456 | orchestrator | 2026-04-11 00:45:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:45:40.112233 | orchestrator | 2026-04-11 00:45:40 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED 2026-04-11 00:45:40.112328 | orchestrator | 2026-04-11 00:45:40 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED 2026-04-11 00:45:40.112348 | orchestrator | 2026-04-11 00:45:40 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:45:40.113073 | orchestrator | 2026-04-11 00:45:40 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED 2026-04-11 00:45:40.113703 | orchestrator | 2026-04-11 00:45:40 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED 2026-04-11 00:45:40.120579 | orchestrator | 2026-04-11 00:45:40 | INFO  | Task 664c9c40-3258-4c61-8fc0-cbe42bd33372 is in state STARTED 2026-04-11 00:45:40.121621 | orchestrator | 2026-04-11 00:45:40 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:45:40.121667 | orchestrator | 2026-04-11 00:45:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:45:43.154513 | orchestrator | 2026-04-11 00:45:43 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED 2026-04-11 00:45:43.154602 | orchestrator | 2026-04-11 00:45:43 | INFO  | Task 
cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED 2026-04-11 00:45:43.154994 | orchestrator | 2026-04-11 00:45:43 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:45:43.155714 | orchestrator | 2026-04-11 00:45:43 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED 2026-04-11 00:45:43.156357 | orchestrator | 2026-04-11 00:45:43 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED 2026-04-11 00:45:43.157001 | orchestrator | 2026-04-11 00:45:43 | INFO  | Task 664c9c40-3258-4c61-8fc0-cbe42bd33372 is in state STARTED 2026-04-11 00:45:43.157612 | orchestrator | 2026-04-11 00:45:43 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:45:43.157637 | orchestrator | 2026-04-11 00:45:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:45:46.289880 | orchestrator | 2026-04-11 00:45:46 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED 2026-04-11 00:45:46.289940 | orchestrator | 2026-04-11 00:45:46 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED 2026-04-11 00:45:46.289951 | orchestrator | 2026-04-11 00:45:46 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:45:46.289961 | orchestrator | 2026-04-11 00:45:46 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED 2026-04-11 00:45:46.289967 | orchestrator | 2026-04-11 00:45:46 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED 2026-04-11 00:45:46.289980 | orchestrator | 2026-04-11 00:45:46 | INFO  | Task 664c9c40-3258-4c61-8fc0-cbe42bd33372 is in state STARTED 2026-04-11 00:45:46.289986 | orchestrator | 2026-04-11 00:45:46 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:45:46.290009 | orchestrator | 2026-04-11 00:45:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:45:49.438377 | orchestrator | 2026-04-11 00:45:49 | INFO  | Task 
feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED 2026-04-11 00:45:49.442269 | orchestrator | 2026-04-11 00:45:49 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED 2026-04-11 00:45:49.442520 | orchestrator | 2026-04-11 00:45:49 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:45:49.443827 | orchestrator | 2026-04-11 00:45:49 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED 2026-04-11 00:45:49.445295 | orchestrator | 2026-04-11 00:45:49 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED 2026-04-11 00:45:49.445987 | orchestrator | 2026-04-11 00:45:49 | INFO  | Task 664c9c40-3258-4c61-8fc0-cbe42bd33372 is in state STARTED 2026-04-11 00:45:49.447453 | orchestrator | 2026-04-11 00:45:49 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:45:49.447471 | orchestrator | 2026-04-11 00:45:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:45:52.533796 | orchestrator | 2026-04-11 00:45:52 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED 2026-04-11 00:45:52.533889 | orchestrator | 2026-04-11 00:45:52 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED 2026-04-11 00:45:52.533899 | orchestrator | 2026-04-11 00:45:52 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:45:52.533905 | orchestrator | 2026-04-11 00:45:52 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED 2026-04-11 00:45:52.533911 | orchestrator | 2026-04-11 00:45:52 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED 2026-04-11 00:45:52.533918 | orchestrator | 2026-04-11 00:45:52 | INFO  | Task 664c9c40-3258-4c61-8fc0-cbe42bd33372 is in state STARTED 2026-04-11 00:45:52.533924 | orchestrator | 2026-04-11 00:45:52 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:45:52.533932 | orchestrator | 2026-04-11 
00:45:52 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:45:55.756155 | orchestrator | 2026-04-11 00:45:55 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED
2026-04-11 00:45:55.759875 | orchestrator | 2026-04-11 00:45:55 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:45:55.759970 | orchestrator | 2026-04-11 00:45:55 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:45:55.763820 | orchestrator | 2026-04-11 00:45:55 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:45:55.767820 | orchestrator | 2026-04-11 00:45:55 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:45:55.772093 | orchestrator | 2026-04-11 00:45:55 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED
2026-04-11 00:45:55.772352 | orchestrator | 2026-04-11 00:45:55 | INFO  | Task 664c9c40-3258-4c61-8fc0-cbe42bd33372 is in state SUCCESS
2026-04-11 00:45:55.774500 | orchestrator |
2026-04-11 00:45:55.774544 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-04-11 00:45:55.774558 | orchestrator |
2026-04-11 00:45:55.774570 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-04-11 00:45:55.774582 | orchestrator | Saturday 11 April 2026 00:45:43 +0000 (0:00:00.561) 0:00:00.561 ********
2026-04-11 00:45:55.774595 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:45:55.774608 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:45:55.774641 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:45:55.774648 | orchestrator | changed: [testbed-manager]
2026-04-11 00:45:55.774654 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:45:55.774660 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:45:55.774666 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:45:55.774673 | orchestrator |
2026-04-11 00:45:55.774679 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-04-11 00:45:55.774685 | orchestrator | Saturday 11 April 2026 00:45:47 +0000 (0:00:04.474) 0:00:05.036 ********
2026-04-11 00:45:55.774692 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-11 00:45:55.774699 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-11 00:45:55.774705 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-11 00:45:55.774739 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-11 00:45:55.774750 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-11 00:45:55.774760 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-11 00:45:55.774771 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-11 00:45:55.774781 | orchestrator |
2026-04-11 00:45:55.774792 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2026-04-11 00:45:55.774799 | orchestrator | Saturday 11 April 2026 00:45:49 +0000 (0:00:01.886) 0:00:06.922 ******** 2026-04-11 00:45:55.774810 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-11 00:45:48.205250', 'end': '2026-04-11 00:45:48.210963', 'delta': '0:00:00.005713', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-11 00:45:55.774820 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-11 00:45:48.170937', 'end': '2026-04-11 00:45:49.176816', 'delta': '0:00:01.005879', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-11 00:45:55.774827 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-11 00:45:48.271662', 'end': '2026-04-11 00:45:48.276663', 'delta': '0:00:00.005001', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-11 00:45:55.774858 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-11 00:45:49.183195', 'end': '2026-04-11 00:45:49.190732', 'delta': '0:00:00.007537', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-11 00:45:55.774888 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-11 00:45:48.703981', 'end': '2026-04-11 00:45:48.713852', 'delta': '0:00:00.009871', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-11 00:45:55.774901 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-11 00:45:49.266050', 'end': '2026-04-11 00:45:49.274533', 'delta': '0:00:00.008483', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-11 00:45:55.774911 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-11 00:45:48.415108', 'end': '2026-04-11 00:45:48.419503', 'delta': '0:00:00.004395', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-11 00:45:55.774921 | orchestrator |
2026-04-11 00:45:55.774931 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-04-11 00:45:55.774940 | orchestrator | Saturday 11 April 2026 00:45:50 +0000 (0:00:01.404) 0:00:08.326 ********
2026-04-11 00:45:55.774951 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-11 00:45:55.774961 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-11 00:45:55.774971 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-11 00:45:55.774982 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-11 00:45:55.774993 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-11 00:45:55.775003 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-11 00:45:55.775013 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-11 00:45:55.775029 | orchestrator |
2026-04-11 00:45:55.775039 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-04-11 00:45:55.775051 | orchestrator | Saturday 11 April 2026 00:45:51 +0000 (0:00:01.119) 0:00:09.446 ********
2026-04-11 00:45:55.775060 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-04-11 00:45:55.775071 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-04-11 00:45:55.775078 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-04-11 00:45:55.775084 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-04-11 00:45:55.775091 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-04-11 00:45:55.775097 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-04-11 00:45:55.775103 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-04-11 00:45:55.775109 | orchestrator |
2026-04-11 00:45:55.775116 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:45:55.775148 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:45:55.775157 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:45:55.775164 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:45:55.775170 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:45:55.775177 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:45:55.775183 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:45:55.775189 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:45:55.775196 | orchestrator |
2026-04-11 00:45:55.775202 | orchestrator |
2026-04-11 00:45:55.775208 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:45:55.775215 | orchestrator | Saturday 11 April 2026 00:45:54 +0000 (0:00:02.058) 0:00:11.504 ********
2026-04-11 00:45:55.775221 | orchestrator | ===============================================================================
2026-04-11 00:45:55.775235 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.47s
2026-04-11 00:45:55.775241 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.06s
2026-04-11 00:45:55.775247 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.89s
2026-04-11 00:45:55.775254 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.40s
2026-04-11 00:45:55.775260 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.12s
2026-04-11 00:45:55.776896 | orchestrator | 2026-04-11 00:45:55 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:45:55.776945 | orchestrator | 2026-04-11 00:45:55 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:45:58.914179 | orchestrator | 2026-04-11 00:45:58 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED
2026-04-11 00:45:58.914444 | orchestrator | 2026-04-11 00:45:58 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:45:58.915417 | orchestrator | 2026-04-11 00:45:58 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:45:58.934430 | orchestrator | 2026-04-11 00:45:58 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:45:58.934521 | orchestrator | 2026-04-11 00:45:58 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:45:58.934527 | orchestrator | 2026-04-11 00:45:58 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is
in state STARTED 2026-04-11 00:45:58.934532 | orchestrator | 2026-04-11 00:45:58 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:45:58.934537 | orchestrator | 2026-04-11 00:45:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:46:02.034140 | orchestrator | 2026-04-11 00:46:02 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED 2026-04-11 00:46:02.037209 | orchestrator | 2026-04-11 00:46:02 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED 2026-04-11 00:46:02.041994 | orchestrator | 2026-04-11 00:46:02 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED 2026-04-11 00:46:02.043029 | orchestrator | 2026-04-11 00:46:02 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:46:02.044960 | orchestrator | 2026-04-11 00:46:02 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED 2026-04-11 00:46:02.046901 | orchestrator | 2026-04-11 00:46:02 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED 2026-04-11 00:46:02.048468 | orchestrator | 2026-04-11 00:46:02 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:46:02.048987 | orchestrator | 2026-04-11 00:46:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:46:05.254763 | orchestrator | 2026-04-11 00:46:05 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED 2026-04-11 00:46:05.254878 | orchestrator | 2026-04-11 00:46:05 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED 2026-04-11 00:46:05.254889 | orchestrator | 2026-04-11 00:46:05 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED 2026-04-11 00:46:05.254897 | orchestrator | 2026-04-11 00:46:05 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:46:05.254903 | orchestrator | 2026-04-11 00:46:05 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in 
state STARTED 2026-04-11 00:46:05.255122 | orchestrator | 2026-04-11 00:46:05 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED 2026-04-11 00:46:05.255132 | orchestrator | 2026-04-11 00:46:05 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:46:05.255139 | orchestrator | 2026-04-11 00:46:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:46:08.142234 | orchestrator | 2026-04-11 00:46:08 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED 2026-04-11 00:46:08.144365 | orchestrator | 2026-04-11 00:46:08 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED 2026-04-11 00:46:08.145555 | orchestrator | 2026-04-11 00:46:08 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED 2026-04-11 00:46:08.145581 | orchestrator | 2026-04-11 00:46:08 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:46:08.146505 | orchestrator | 2026-04-11 00:46:08 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED 2026-04-11 00:46:08.147022 | orchestrator | 2026-04-11 00:46:08 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED 2026-04-11 00:46:08.147889 | orchestrator | 2026-04-11 00:46:08 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:46:08.147914 | orchestrator | 2026-04-11 00:46:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:46:11.189166 | orchestrator | 2026-04-11 00:46:11 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED 2026-04-11 00:46:11.189250 | orchestrator | 2026-04-11 00:46:11 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED 2026-04-11 00:46:11.189617 | orchestrator | 2026-04-11 00:46:11 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED 2026-04-11 00:46:11.190580 | orchestrator | 2026-04-11 00:46:11 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state 
STARTED
2026-04-11 00:46:11.191323 | orchestrator | 2026-04-11 00:46:11 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:11.192267 | orchestrator | 2026-04-11 00:46:11 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED
2026-04-11 00:46:11.194421 | orchestrator | 2026-04-11 00:46:11 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:11.194465 | orchestrator | 2026-04-11 00:46:11 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:14.272430 | orchestrator | 2026-04-11 00:46:14 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED
2026-04-11 00:46:14.272518 | orchestrator | 2026-04-11 00:46:14 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:46:14.272529 | orchestrator | 2026-04-11 00:46:14 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:14.272537 | orchestrator | 2026-04-11 00:46:14 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:14.272543 | orchestrator | 2026-04-11 00:46:14 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:14.272550 | orchestrator | 2026-04-11 00:46:14 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED
2026-04-11 00:46:14.272556 | orchestrator | 2026-04-11 00:46:14 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:14.272563 | orchestrator | 2026-04-11 00:46:14 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:17.305400 | orchestrator | 2026-04-11 00:46:17 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED
2026-04-11 00:46:17.305495 | orchestrator | 2026-04-11 00:46:17 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:46:17.308623 | orchestrator | 2026-04-11 00:46:17 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:17.308749 | orchestrator | 2026-04-11 00:46:17 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:17.309455 | orchestrator | 2026-04-11 00:46:17 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:17.309879 | orchestrator | 2026-04-11 00:46:17 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED
2026-04-11 00:46:17.312326 | orchestrator | 2026-04-11 00:46:17 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:17.312370 | orchestrator | 2026-04-11 00:46:17 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:20.443158 | orchestrator | 2026-04-11 00:46:20 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state STARTED
2026-04-11 00:46:20.443211 | orchestrator | 2026-04-11 00:46:20 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:46:20.443227 | orchestrator | 2026-04-11 00:46:20 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:20.443232 | orchestrator | 2026-04-11 00:46:20 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:20.443251 | orchestrator | 2026-04-11 00:46:20 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:20.443257 | orchestrator | 2026-04-11 00:46:20 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED
2026-04-11 00:46:20.443261 | orchestrator | 2026-04-11 00:46:20 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:20.443266 | orchestrator | 2026-04-11 00:46:20 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:23.479335 | orchestrator | 2026-04-11 00:46:23 | INFO  | Task feb5e8c0-481f-4e4c-b959-083b684ae32f is in state SUCCESS
2026-04-11 00:46:23.479409 | orchestrator | 2026-04-11 00:46:23 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:46:23.479421 | orchestrator | 2026-04-11 00:46:23 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:23.479428 | orchestrator | 2026-04-11 00:46:23 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:23.479435 | orchestrator | 2026-04-11 00:46:23 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:23.479440 | orchestrator | 2026-04-11 00:46:23 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED
2026-04-11 00:46:23.479446 | orchestrator | 2026-04-11 00:46:23 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:23.479453 | orchestrator | 2026-04-11 00:46:23 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:26.570140 | orchestrator | 2026-04-11 00:46:26 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:46:26.570189 | orchestrator | 2026-04-11 00:46:26 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:26.570197 | orchestrator | 2026-04-11 00:46:26 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:26.570201 | orchestrator | 2026-04-11 00:46:26 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:26.570205 | orchestrator | 2026-04-11 00:46:26 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED
2026-04-11 00:46:26.570209 | orchestrator | 2026-04-11 00:46:26 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:26.570213 | orchestrator | 2026-04-11 00:46:26 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:29.853147 | orchestrator | 2026-04-11 00:46:29 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:46:29.853259 | orchestrator | 2026-04-11 00:46:29 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:29.853287 | orchestrator | 2026-04-11 00:46:29 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:29.853312 | orchestrator | 2026-04-11 00:46:29 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:29.853334 | orchestrator | 2026-04-11 00:46:29 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED
2026-04-11 00:46:29.853353 | orchestrator | 2026-04-11 00:46:29 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:29.853370 | orchestrator | 2026-04-11 00:46:29 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:32.673652 | orchestrator | 2026-04-11 00:46:32 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:46:32.676434 | orchestrator | 2026-04-11 00:46:32 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:32.679359 | orchestrator | 2026-04-11 00:46:32 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:32.681173 | orchestrator | 2026-04-11 00:46:32 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:32.683164 | orchestrator | 2026-04-11 00:46:32 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state STARTED
2026-04-11 00:46:32.683914 | orchestrator | 2026-04-11 00:46:32 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:32.683949 | orchestrator | 2026-04-11 00:46:32 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:35.735768 | orchestrator | 2026-04-11 00:46:35 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:46:35.738491 | orchestrator | 2026-04-11 00:46:35 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:35.740905 | orchestrator | 2026-04-11 00:46:35 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:35.741985 | orchestrator | 2026-04-11 00:46:35 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:35.742663 | orchestrator | 2026-04-11 00:46:35 | INFO  | Task 78d7a501-ef29-43d2-903d-7cb5779d97f8 is in state SUCCESS
2026-04-11 00:46:35.746470 | orchestrator | 2026-04-11 00:46:35 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:35.746596 | orchestrator | 2026-04-11 00:46:35 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:38.832014 | orchestrator | 2026-04-11 00:46:38 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:46:38.835773 | orchestrator | 2026-04-11 00:46:38 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:38.840512 | orchestrator | 2026-04-11 00:46:38 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:38.844212 | orchestrator | 2026-04-11 00:46:38 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:38.847112 | orchestrator | 2026-04-11 00:46:38 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:38.847301 | orchestrator | 2026-04-11 00:46:38 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:41.878278 | orchestrator | 2026-04-11 00:46:41 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:46:41.878361 | orchestrator | 2026-04-11 00:46:41 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:41.878575 | orchestrator | 2026-04-11 00:46:41 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:41.879567 | orchestrator | 2026-04-11 00:46:41 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:41.881186 | orchestrator | 2026-04-11 00:46:41 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:41.882857 | orchestrator | 2026-04-11 00:46:41 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:44.940864 | orchestrator | 2026-04-11 00:46:44 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:46:44.940939 | orchestrator | 2026-04-11 00:46:44 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:44.943936 | orchestrator | 2026-04-11 00:46:44 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:44.944069 | orchestrator | 2026-04-11 00:46:44 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:44.945804 | orchestrator | 2026-04-11 00:46:44 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:44.945839 | orchestrator | 2026-04-11 00:46:44 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:47.984141 | orchestrator | 2026-04-11 00:46:47 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state STARTED
2026-04-11 00:46:47.984232 | orchestrator | 2026-04-11 00:46:47 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:47.984240 | orchestrator | 2026-04-11 00:46:47 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:47.984245 | orchestrator | 2026-04-11 00:46:47 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:47.984250 | orchestrator | 2026-04-11 00:46:47 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:47.984255 | orchestrator | 2026-04-11 00:46:47 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:51.009837 | orchestrator | 2026-04-11 00:46:51 | INFO  | Task cffa15e3-1368-445e-a900-4a29953c7d32 is in state SUCCESS
2026-04-11 00:46:51.011521 | orchestrator |
2026-04-11 00:46:51.011567 | orchestrator |
2026-04-11 00:46:51.011609 | orchestrator | PLAY [Apply role homer]
********************************************************
2026-04-11 00:46:51.011619 | orchestrator |
2026-04-11 00:46:51.011625 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-04-11 00:46:51.011631 | orchestrator | Saturday 11 April 2026 00:45:42 +0000 (0:00:00.476) 0:00:00.476 ********
2026-04-11 00:46:51.011636 | orchestrator | ok: [testbed-manager] => {
2026-04-11 00:46:51.011643 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-04-11 00:46:51.011650 | orchestrator | }
2026-04-11 00:46:51.011670 | orchestrator |
2026-04-11 00:46:51.011681 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-04-11 00:46:51.011687 | orchestrator | Saturday 11 April 2026 00:45:42 +0000 (0:00:00.288) 0:00:00.765 ********
2026-04-11 00:46:51.011693 | orchestrator | ok: [testbed-manager]
2026-04-11 00:46:51.011699 | orchestrator |
2026-04-11 00:46:51.011704 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-04-11 00:46:51.011710 | orchestrator | Saturday 11 April 2026 00:45:44 +0000 (0:00:02.296) 0:00:03.062 ********
2026-04-11 00:46:51.011715 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-04-11 00:46:51.011721 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-04-11 00:46:51.011726 | orchestrator |
2026-04-11 00:46:51.011732 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-04-11 00:46:51.011738 | orchestrator | Saturday 11 April 2026 00:45:46 +0000 (0:00:01.596) 0:00:04.658 ********
2026-04-11 00:46:51.011743 | orchestrator | changed: [testbed-manager]
2026-04-11 00:46:51.011749 | orchestrator |
2026-04-11 00:46:51.011755 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-04-11 00:46:51.011760 | orchestrator | Saturday 11 April 2026 00:45:48 +0000 (0:00:02.371) 0:00:07.029 ********
2026-04-11 00:46:51.011766 | orchestrator | changed: [testbed-manager]
2026-04-11 00:46:51.011771 | orchestrator |
2026-04-11 00:46:51.011777 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-04-11 00:46:51.011782 | orchestrator | Saturday 11 April 2026 00:45:51 +0000 (0:00:02.921) 0:00:09.951 ********
2026-04-11 00:46:51.011788 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-04-11 00:46:51.012078 | orchestrator | ok: [testbed-manager]
2026-04-11 00:46:51.012610 | orchestrator |
2026-04-11 00:46:51.012620 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-04-11 00:46:51.012626 | orchestrator | Saturday 11 April 2026 00:46:18 +0000 (0:00:26.382) 0:00:36.334 ********
2026-04-11 00:46:51.012632 | orchestrator | changed: [testbed-manager]
2026-04-11 00:46:51.012650 | orchestrator |
2026-04-11 00:46:51.012669 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:46:51.012674 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:46:51.012680 | orchestrator |
2026-04-11 00:46:51.012686 | orchestrator |
2026-04-11 00:46:51.012691 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:46:51.012696 | orchestrator | Saturday 11 April 2026 00:46:22 +0000 (0:00:04.129) 0:00:40.464 ********
2026-04-11 00:46:51.012701 | orchestrator | ===============================================================================
2026-04-11 00:46:51.012707 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.38s
2026-04-11 00:46:51.012712 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.13s
2026-04-11 00:46:51.012717 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.92s
2026-04-11 00:46:51.012723 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.37s
2026-04-11 00:46:51.012728 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.30s
2026-04-11 00:46:51.012734 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.60s
2026-04-11 00:46:51.012739 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.29s
2026-04-11 00:46:51.012745 | orchestrator |
2026-04-11 00:46:51.012750 | orchestrator |
2026-04-11 00:46:51.012756 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-11 00:46:51.012761 | orchestrator |
2026-04-11 00:46:51.012767 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-11 00:46:51.012772 | orchestrator | Saturday 11 April 2026 00:45:42 +0000 (0:00:00.763) 0:00:00.763 ********
2026-04-11 00:46:51.012778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-11 00:46:51.012784 | orchestrator |
2026-04-11 00:46:51.012789 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-11 00:46:51.012794 | orchestrator | Saturday 11 April 2026 00:45:43 +0000 (0:00:00.639) 0:00:01.403 ********
2026-04-11 00:46:51.012798 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-11 00:46:51.012804 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-11 00:46:51.012809 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-11 00:46:51.012815 | orchestrator |
2026-04-11 00:46:51.012820 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-11 00:46:51.012825 | orchestrator | Saturday 11 April 2026 00:45:47 +0000 (0:00:04.414) 0:00:05.817 ********
2026-04-11 00:46:51.012830 | orchestrator | changed: [testbed-manager]
2026-04-11 00:46:51.012835 | orchestrator |
2026-04-11 00:46:51.012839 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-11 00:46:51.012844 | orchestrator | Saturday 11 April 2026 00:45:50 +0000 (0:00:02.709) 0:00:08.527 ********
2026-04-11 00:46:51.012881 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-11 00:46:51.012890 | orchestrator | ok: [testbed-manager]
2026-04-11 00:46:51.012895 | orchestrator |
2026-04-11 00:46:51.012901 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-11 00:46:51.012906 | orchestrator | Saturday 11 April 2026 00:46:24 +0000 (0:00:33.445) 0:00:41.973 ********
2026-04-11 00:46:51.012943 | orchestrator | changed: [testbed-manager]
2026-04-11 00:46:51.012949 | orchestrator |
2026-04-11 00:46:51.012954 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-11 00:46:51.012958 | orchestrator | Saturday 11 April 2026 00:46:25 +0000 (0:00:01.591) 0:00:43.565 ********
2026-04-11 00:46:51.012963 | orchestrator | ok: [testbed-manager]
2026-04-11 00:46:51.012968 | orchestrator |
2026-04-11 00:46:51.012986 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-11 00:46:51.012992 | orchestrator | Saturday 11 April 2026 00:46:26 +0000 (0:00:00.632) 0:00:44.197 ********
2026-04-11 00:46:51.012997 | orchestrator | changed: [testbed-manager]
2026-04-11 00:46:51.013002 | orchestrator |
2026-04-11 00:46:51.013007 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-11 00:46:51.013012 | orchestrator | Saturday 11 April 2026 00:46:29 +0000 (0:00:03.196) 0:00:47.394 ********
2026-04-11 00:46:51.013018 | orchestrator | changed: [testbed-manager]
2026-04-11 00:46:51.013023 | orchestrator |
2026-04-11 00:46:51.013028 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-11 00:46:51.013033 | orchestrator | Saturday 11 April 2026 00:46:30 +0000 (0:00:01.318) 0:00:48.712 ********
2026-04-11 00:46:51.013039 | orchestrator | changed: [testbed-manager]
2026-04-11 00:46:51.013044 | orchestrator |
2026-04-11 00:46:51.013050 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-11 00:46:51.013055 | orchestrator | Saturday 11 April 2026 00:46:31 +0000 (0:00:00.731) 0:00:49.444 ********
2026-04-11 00:46:51.013061 | orchestrator | ok: [testbed-manager]
2026-04-11 00:46:51.013066 | orchestrator |
2026-04-11 00:46:51.013071 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:46:51.013076 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:46:51.013081 | orchestrator |
2026-04-11 00:46:51.013086 | orchestrator |
2026-04-11 00:46:51.013091 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:46:51.013096 | orchestrator | Saturday 11 April 2026 00:46:32 +0000 (0:00:00.798) 0:00:50.242 ********
2026-04-11 00:46:51.013102 | orchestrator | ===============================================================================
2026-04-11 00:46:51.013107 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.45s
2026-04-11 00:46:51.013112 | orchestrator | osism.services.openstackclient : Create required directories ------------ 4.41s
2026-04-11 00:46:51.013118 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.20s
2026-04-11 00:46:51.013123 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.71s
2026-04-11 00:46:51.013129 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.59s
2026-04-11 00:46:51.013135 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.32s
2026-04-11 00:46:51.013140 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.80s
2026-04-11 00:46:51.013146 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.73s
2026-04-11 00:46:51.013151 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.64s
2026-04-11 00:46:51.013157 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.63s
2026-04-11 00:46:51.013162 | orchestrator |
2026-04-11 00:46:51.013167 | orchestrator |
2026-04-11 00:46:51.013173 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-11 00:46:51.013178 | orchestrator |
2026-04-11 00:46:51.013183 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-11 00:46:51.013189 | orchestrator | Saturday 11 April 2026 00:45:36 +0000 (0:00:00.383) 0:00:00.383 ********
2026-04-11 00:46:51.013194 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:46:51.013200 | orchestrator |
2026-04-11 00:46:51.013205 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-11 00:46:51.013211 | orchestrator | Saturday 11 April 2026 00:45:38 +0000 (0:00:01.386) 0:00:01.770 ********
2026-04-11
00:46:51.013216 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 00:46:51.013222 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 00:46:51.013234 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 00:46:51.013240 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 00:46:51.013245 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 00:46:51.013250 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 00:46:51.013256 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 00:46:51.013261 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 00:46:51.013267 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 00:46:51.013272 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 00:46:51.013278 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 00:46:51.013308 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 00:46:51.013314 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 00:46:51.013320 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 00:46:51.013325 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 00:46:51.013331 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 00:46:51.013336 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 00:46:51.013342 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 00:46:51.013347 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 00:46:51.013353 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 00:46:51.013359 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 00:46:51.013364 | orchestrator |
2026-04-11 00:46:51.013370 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-11 00:46:51.013375 | orchestrator | Saturday 11 April 2026 00:45:41 +0000 (0:00:03.792) 0:00:05.562 ********
2026-04-11 00:46:51.013381 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4
2026-04-11 00:46:51.013387 | orchestrator |
2026-04-11 00:46:51.013392 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-11 00:46:51.013397 | orchestrator | Saturday 11 April 2026 00:45:43 +0000 (0:00:01.415) 0:00:06.977 ********
2026-04-11 00:46:51.013406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 00:46:51.013414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 00:46:51.013426 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 00:46:51.013448 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 00:46:51.013483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 00:46:51.013492 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 00:46:51.013498 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 00:46:51.013503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013599 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013609 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013650 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013761 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013770 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 00:46:51.013780 | orchestrator |
2026-04-11 00:46:51.013786 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-04-11 00:46:51.013793 | orchestrator | Saturday 11 April 2026 00:45:49 +0000 (0:00:06.239) 0:00:13.217 ********
2026-04-11 00:46:51.013799 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment':
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.013806 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.013818 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.013824 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:46:51.013830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.013837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.013843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.013872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.013880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.013887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.013897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.013904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.013912 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:46:51.013919 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:46:51.013926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.013931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.013955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.013965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.013971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.013983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.013989 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:46:51.013995 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:46:51.014002 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014062 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:46:51.014068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014091 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:46:51.014098 | orchestrator | 2026-04-11 00:46:51.014103 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-11 00:46:51.014109 | orchestrator | Saturday 11 April 2026 00:45:51 +0000 (0:00:02.337) 0:00:15.554 ******** 2026-04-11 00:46:51.014118 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.014124 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014134 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014140 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:46:51.014146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.014152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014162 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:46:51.014167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.014190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.014197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014221 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:46:51.014226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.014232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014244 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:46:51.014266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.014285 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:46:51.014290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.014303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-11 00:46:51.014308 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:46:51.014314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.014325 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:46:51.014331 | orchestrator | 2026-04-11 00:46:51.014336 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-04-11 00:46:51.014342 | orchestrator | Saturday 11 April 2026 00:45:56 +0000 (0:00:04.343) 0:00:19.897 ******** 2026-04-11 00:46:51.014348 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:46:51.014354 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:46:51.014359 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:46:51.014364 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:46:51.014370 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:46:51.014379 | orchestrator | skipping: 
[testbed-node-4] 2026-04-11 00:46:51.014384 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:46:51.014390 | orchestrator | 2026-04-11 00:46:51.014411 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-11 00:46:51.014418 | orchestrator | Saturday 11 April 2026 00:45:57 +0000 (0:00:01.595) 0:00:21.492 ******** 2026-04-11 00:46:51.014423 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:46:51.014428 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:46:51.014433 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:46:51.014437 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:46:51.014442 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:46:51.014447 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:46:51.014452 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:46:51.014457 | orchestrator | 2026-04-11 00:46:51.014462 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-11 00:46:51.014469 | orchestrator | Saturday 11 April 2026 00:45:59 +0000 (0:00:01.132) 0:00:22.625 ******** 2026-04-11 00:46:51.014475 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:46:51.014480 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:46:51.014485 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:46:51.014490 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:46:51.014496 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:46:51.014501 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:46:51.014507 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:46:51.014512 | orchestrator | 2026-04-11 00:46:51.014517 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-04-11 00:46:51.014522 | orchestrator | Saturday 11 April 2026 00:46:00 +0000 (0:00:01.664) 0:00:24.290 ******** 2026-04-11 00:46:51.014528 | orchestrator | changed: 
[testbed-node-0] 2026-04-11 00:46:51.014533 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:46:51.014538 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:46:51.014544 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:46:51.014549 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:46:51.014554 | orchestrator | changed: [testbed-manager] 2026-04-11 00:46:51.014559 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:46:51.014565 | orchestrator | 2026-04-11 00:46:51.014570 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-11 00:46:51.014576 | orchestrator | Saturday 11 April 2026 00:46:02 +0000 (0:00:02.043) 0:00:26.333 ******** 2026-04-11 00:46:51.014582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.014588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.014594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.014604 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.014631 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.014640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.014719 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.014730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014774 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014783 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014802 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014823 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014828 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014837 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.014843 | orchestrator | 2026-04-11 00:46:51.014851 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-11 00:46:51.014857 | orchestrator | Saturday 11 April 2026 00:46:08 +0000 (0:00:05.356) 0:00:31.690 ******** 2026-04-11 00:46:51.014863 | orchestrator | [WARNING]: Skipped 2026-04-11 00:46:51.014870 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-11 00:46:51.014876 | orchestrator | to this access issue: 2026-04-11 00:46:51.014881 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-11 00:46:51.014886 | orchestrator | directory 2026-04-11 00:46:51.014892 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 00:46:51.014898 | orchestrator | 2026-04-11 00:46:51.014904 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-11 00:46:51.014909 | orchestrator | Saturday 11 April 2026 00:46:09 +0000 (0:00:01.259) 0:00:32.949 ******** 2026-04-11 00:46:51.014914 | orchestrator | [WARNING]: Skipped 2026-04-11 
00:46:51.014920 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-11 00:46:51.014925 | orchestrator | to this access issue: 2026-04-11 00:46:51.014931 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-11 00:46:51.014936 | orchestrator | directory 2026-04-11 00:46:51.014941 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 00:46:51.014947 | orchestrator | 2026-04-11 00:46:51.014953 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-11 00:46:51.014958 | orchestrator | Saturday 11 April 2026 00:46:10 +0000 (0:00:00.903) 0:00:33.853 ******** 2026-04-11 00:46:51.014964 | orchestrator | [WARNING]: Skipped 2026-04-11 00:46:51.014970 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-11 00:46:51.014981 | orchestrator | to this access issue: 2026-04-11 00:46:51.014986 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-11 00:46:51.014992 | orchestrator | directory 2026-04-11 00:46:51.014998 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 00:46:51.015003 | orchestrator | 2026-04-11 00:46:51.015009 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-11 00:46:51.015014 | orchestrator | Saturday 11 April 2026 00:46:11 +0000 (0:00:00.857) 0:00:34.711 ******** 2026-04-11 00:46:51.015020 | orchestrator | [WARNING]: Skipped 2026-04-11 00:46:51.015025 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-11 00:46:51.015031 | orchestrator | to this access issue: 2026-04-11 00:46:51.015037 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-11 00:46:51.015043 | orchestrator | directory 2026-04-11 00:46:51.015048 | orchestrator | ok: 
[testbed-manager -> localhost] 2026-04-11 00:46:51.015053 | orchestrator | 2026-04-11 00:46:51.015059 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-11 00:46:51.015065 | orchestrator | Saturday 11 April 2026 00:46:12 +0000 (0:00:01.332) 0:00:36.043 ******** 2026-04-11 00:46:51.015070 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:46:51.015076 | orchestrator | changed: [testbed-manager] 2026-04-11 00:46:51.015082 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:46:51.015087 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:46:51.015092 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:46:51.015098 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:46:51.015104 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:46:51.015109 | orchestrator | 2026-04-11 00:46:51.015115 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-11 00:46:51.015120 | orchestrator | Saturday 11 April 2026 00:46:17 +0000 (0:00:05.560) 0:00:41.604 ******** 2026-04-11 00:46:51.015125 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 00:46:51.015131 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 00:46:51.015137 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 00:46:51.015143 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 00:46:51.015148 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 00:46:51.015154 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 00:46:51.015159 | orchestrator | changed: 
[testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 00:46:51.015165 | orchestrator | 2026-04-11 00:46:51.015170 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-11 00:46:51.015176 | orchestrator | Saturday 11 April 2026 00:46:22 +0000 (0:00:04.807) 0:00:46.411 ******** 2026-04-11 00:46:51.015182 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:46:51.015187 | orchestrator | changed: [testbed-manager] 2026-04-11 00:46:51.015192 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:46:51.015198 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:46:51.015203 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:46:51.015208 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:46:51.015219 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:46:51.015225 | orchestrator | 2026-04-11 00:46:51.015231 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-11 00:46:51.015236 | orchestrator | Saturday 11 April 2026 00:46:25 +0000 (0:00:03.089) 0:00:49.500 ******** 2026-04-11 00:46:51.015245 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015264 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015270 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015282 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015287 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015306 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015318 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015332 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015338 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015351 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015363 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015369 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015374 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015385 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015390 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015396 | orchestrator | 2026-04-11 00:46:51.015401 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-11 00:46:51.015407 | orchestrator | Saturday 11 April 2026 00:46:28 +0000 (0:00:02.617) 0:00:52.118 ******** 2026-04-11 00:46:51.015412 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-11 00:46:51.015417 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-11 00:46:51.015422 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-11 00:46:51.015431 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 
2026-04-11 00:46:51.015437 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-11 00:46:51.015441 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-11 00:46:51.015449 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-11 00:46:51.015455 | orchestrator | 2026-04-11 00:46:51.015460 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-11 00:46:51.015466 | orchestrator | Saturday 11 April 2026 00:46:31 +0000 (0:00:03.173) 0:00:55.291 ******** 2026-04-11 00:46:51.015471 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 00:46:51.015476 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 00:46:51.015481 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 00:46:51.015489 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 00:46:51.015494 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 00:46:51.015499 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 00:46:51.015505 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 00:46:51.015510 | orchestrator | 2026-04-11 00:46:51.015515 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-11 00:46:51.015521 | orchestrator | Saturday 11 April 2026 00:46:35 +0000 (0:00:04.305) 0:00:59.597 ******** 2026-04-11 00:46:51.015526 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015582 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015588 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-11 00:46:51.015600 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015628 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015646 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 00:46:51.015694 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015709 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:46:51.015715 | orchestrator | 2026-04-11 
00:46:51.015720 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-04-11 00:46:51.015726 | orchestrator | Saturday 11 April 2026 00:46:41 +0000 (0:00:05.499) 0:01:05.096 ********
2026-04-11 00:46:51.015731 | orchestrator | changed: [testbed-manager] => {
2026-04-11 00:46:51.015737 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 00:46:51.015742 | orchestrator | }
2026-04-11 00:46:51.015747 | orchestrator | changed: [testbed-node-0] => {
2026-04-11 00:46:51.015752 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 00:46:51.015758 | orchestrator | }
2026-04-11 00:46:51.015763 | orchestrator | changed: [testbed-node-1] => {
2026-04-11 00:46:51.015768 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 00:46:51.015777 | orchestrator | }
2026-04-11 00:46:51.015783 | orchestrator | changed: [testbed-node-2] => {
2026-04-11 00:46:51.015788 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 00:46:51.015793 | orchestrator | }
2026-04-11 00:46:51.015798 | orchestrator | changed: [testbed-node-3] => {
2026-04-11 00:46:51.015804 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 00:46:51.015809 | orchestrator | }
2026-04-11 00:46:51.015815 | orchestrator | changed: [testbed-node-4] => {
2026-04-11 00:46:51.015820 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 00:46:51.015825 | orchestrator | }
2026-04-11 00:46:51.015830 | orchestrator | changed: [testbed-node-5] => {
2026-04-11 00:46:51.015836 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 00:46:51.015841 | orchestrator | }
2026-04-11 00:46:51.015846 | orchestrator | 
2026-04-11 00:46:51.015852 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-11 00:46:51.015857 | orchestrator | Saturday 11 April 2026 00:46:42 +0000 (0:00:00.883) 0:01:05.979 ********
2026-04-11 00:46:51.015863 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.015869 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015880 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.015892 | orchestrator | skipping: [testbed-manager] 2026-04-11 00:46:51.015897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015913 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:46:51.015922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.015927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.015949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015960 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:46:51.015969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.015977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.015988 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:46:51.015994 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:46:51.016003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.016009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.016014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.016020 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:46:51.016026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 00:46:51.016032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:46:51.016041 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-11 00:46:51.016047 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:46:51.016053 | orchestrator | 
2026-04-11 00:46:51.016059 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-04-11 00:46:51.016064 | orchestrator | Saturday 11 April 2026 00:46:44 +0000 (0:00:02.103) 0:01:08.083 ********
2026-04-11 00:46:51.016070 | orchestrator | changed: [testbed-manager]
2026-04-11 00:46:51.016076 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:46:51.016081 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:46:51.016087 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:46:51.016092 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:46:51.016101 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:46:51.016107 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:46:51.016113 | orchestrator | 
2026-04-11 00:46:51.016119 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-04-11 00:46:51.016124 | orchestrator | Saturday 11 April 2026 00:46:45 +0000 (0:00:01.428) 0:01:09.511 ********
2026-04-11 00:46:51.016130 | orchestrator | changed: [testbed-manager]
2026-04-11 00:46:51.016135 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:46:51.016141 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:46:51.016146 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:46:51.016151 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:46:51.016156 | orchestrator | changed: [testbed-node-4]
2026-04-11 
00:46:51.016161 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:46:51.016166 | orchestrator | 
2026-04-11 00:46:51.016172 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 00:46:51.016177 | orchestrator | Saturday 11 April 2026 00:46:47 +0000 (0:00:01.364) 0:01:10.876 ********
2026-04-11 00:46:51.016183 | orchestrator | 
2026-04-11 00:46:51.016189 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 00:46:51.016194 | orchestrator | Saturday 11 April 2026 00:46:47 +0000 (0:00:00.061) 0:01:10.938 ********
2026-04-11 00:46:51.016199 | orchestrator | 
2026-04-11 00:46:51.016205 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 00:46:51.016210 | orchestrator | Saturday 11 April 2026 00:46:47 +0000 (0:00:00.068) 0:01:11.007 ********
2026-04-11 00:46:51.016215 | orchestrator | 
2026-04-11 00:46:51.016220 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 00:46:51.016224 | orchestrator | Saturday 11 April 2026 00:46:47 +0000 (0:00:00.057) 0:01:11.064 ********
2026-04-11 00:46:51.016229 | orchestrator | 
2026-04-11 00:46:51.016234 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 00:46:51.016240 | orchestrator | Saturday 11 April 2026 00:46:47 +0000 (0:00:00.060) 0:01:11.125 ********
2026-04-11 00:46:51.016245 | orchestrator | 
2026-04-11 00:46:51.016250 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 00:46:51.016255 | orchestrator | Saturday 11 April 2026 00:46:47 +0000 (0:00:00.060) 0:01:11.185 ********
2026-04-11 00:46:51.016260 | orchestrator | 
2026-04-11 00:46:51.016266 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 00:46:51.016271 | orchestrator | Saturday 11 
April 2026 00:46:47 +0000 (0:00:00.060) 0:01:11.245 ********
2026-04-11 00:46:51.016276 | orchestrator | 
2026-04-11 00:46:51.016282 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-11 00:46:51.016287 | orchestrator | Saturday 11 April 2026 00:46:47 +0000 (0:00:00.080) 0:01:11.326 ********
2026-04-11 00:46:51.016307 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_1_kf7bmw/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_1_kf7bmw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_1_kf7bmw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_1_kf7bmw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n 
File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"} 2026-04-11 00:46:51.016320 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_6cmcwsto/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_6cmcwsto/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_6cmcwsto/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_6cmcwsto/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"} 2026-04-11 00:46:51.016333 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_3mj2fflh/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_3mj2fflh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_3mj2fflh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_3mj2fflh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"} 2026-04-11 00:46:51.016347 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_1ef9wpov/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_1ef9wpov/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_1ef9wpov/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_1ef9wpov/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"} 2026-04-11 00:46:51.016360 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_jhrib8b_/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_jhrib8b_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_jhrib8b_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_jhrib8b_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"} 2026-04-11 00:46:51.016372 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_f3_6qn06/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_f3_6qn06/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_f3_6qn06/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_f3_6qn06/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"} 2026-04-11 00:46:51.016382 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_v61ddxr0/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_v61ddxr0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_v61ddxr0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_v61ddxr0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"}
2026-04-11 00:46:51.016392 | orchestrator |
2026-04-11 00:46:51.016397 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:46:51.016402 | orchestrator | testbed-manager : ok=20  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-11 00:46:51.016407 | orchestrator | testbed-node-0 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-11 00:46:51.016414 | orchestrator | testbed-node-1 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-11 00:46:51.016419 | orchestrator | testbed-node-2 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-11 00:46:51.016424 | orchestrator | testbed-node-3 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-11 00:46:51.016431 | orchestrator | testbed-node-4 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-11 00:46:51.016436 | orchestrator | testbed-node-5 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-11 00:46:51.016441 | orchestrator |
2026-04-11 00:46:51.016446 | orchestrator |
2026-04-11 00:46:51.016451 | orchestrator |
TASKS RECAP ********************************************************************
2026-04-11 00:46:51.016456 | orchestrator | Saturday 11 April 2026 00:46:50 +0000 (0:00:02.738) 0:01:14.065 ********
2026-04-11 00:46:51.016461 | orchestrator | ===============================================================================
2026-04-11 00:46:51.016466 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.24s
2026-04-11 00:46:51.016471 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.56s
2026-04-11 00:46:51.016476 | orchestrator | service-check-containers : common | Check containers -------------------- 5.50s
2026-04-11 00:46:51.016482 | orchestrator | common : Copying over config.json files for services -------------------- 5.36s
2026-04-11 00:46:51.016488 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.81s
2026-04-11 00:46:51.016492 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.34s
2026-04-11 00:46:51.016497 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 4.31s
2026-04-11 00:46:51.016502 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.79s
2026-04-11 00:46:51.016507 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.17s
2026-04-11 00:46:51.016512 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.09s
2026-04-11 00:46:51.016517 | orchestrator | common : Restart fluentd container -------------------------------------- 2.74s
2026-04-11 00:46:51.016522 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.62s
2026-04-11 00:46:51.016527 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.34s
2026-04-11 00:46:51.016532 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.10s
2026-04-11 00:46:51.016537 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.04s
2026-04-11 00:46:51.016542 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.66s
2026-04-11 00:46:51.016547 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 1.60s
2026-04-11 00:46:51.016552 | orchestrator | common : Creating log volume -------------------------------------------- 1.43s
2026-04-11 00:46:51.016557 | orchestrator | common : include_tasks -------------------------------------------------- 1.42s
2026-04-11 00:46:51.016567 | orchestrator | common : include_tasks -------------------------------------------------- 1.39s
2026-04-11 00:46:51.016572 | orchestrator | 2026-04-11 00:46:51 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:51.016577 | orchestrator | 2026-04-11 00:46:51 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:51.016582 | orchestrator | 2026-04-11 00:46:51 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:51.016588 | orchestrator | 2026-04-11 00:46:51 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:51.016593 | orchestrator | 2026-04-11 00:46:51 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:54.048495 | orchestrator | 2026-04-11 00:46:54 | INFO  | Task f58363f8-6bc7-4dfb-aa8b-b3f6e88a2521 is in state STARTED
2026-04-11 00:46:54.054765 | orchestrator | 2026-04-11 00:46:54 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED
2026-04-11 00:46:54.056466 | orchestrator | 2026-04-11 00:46:54 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:54.057169 | orchestrator | 2026-04-11 00:46:54 | INFO  | Task
aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:54.057920 | orchestrator | 2026-04-11 00:46:54 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:54.058683 | orchestrator | 2026-04-11 00:46:54 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:54.059347 | orchestrator | 2026-04-11 00:46:54 | INFO  | Task 544eb7d9-b1ea-4ec2-91fd-97e765db50a1 is in state STARTED
2026-04-11 00:46:54.059970 | orchestrator | 2026-04-11 00:46:54 | INFO  | Task 2206daab-52a0-421a-85ca-982db2e5f0bb is in state STARTED
2026-04-11 00:46:54.060718 | orchestrator | 2026-04-11 00:46:54 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:46:57.083332 | orchestrator | 2026-04-11 00:46:57 | INFO  | Task f58363f8-6bc7-4dfb-aa8b-b3f6e88a2521 is in state STARTED
2026-04-11 00:46:57.083703 | orchestrator | 2026-04-11 00:46:57 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED
2026-04-11 00:46:57.084339 | orchestrator | 2026-04-11 00:46:57 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:46:57.084959 | orchestrator | 2026-04-11 00:46:57 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:46:57.085573 | orchestrator | 2026-04-11 00:46:57 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state STARTED
2026-04-11 00:46:57.086197 | orchestrator | 2026-04-11 00:46:57 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:46:57.086798 | orchestrator | 2026-04-11 00:46:57 | INFO  | Task 544eb7d9-b1ea-4ec2-91fd-97e765db50a1 is in state STARTED
2026-04-11 00:46:57.087362 | orchestrator | 2026-04-11 00:46:57 | INFO  | Task 2206daab-52a0-421a-85ca-982db2e5f0bb is in state STARTED
2026-04-11 00:46:57.087405 | orchestrator | 2026-04-11 00:46:57 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:47:00.129290 | orchestrator | 2026-04-11 00:47:00 | INFO  | Task
f58363f8-6bc7-4dfb-aa8b-b3f6e88a2521 is in state STARTED
2026-04-11 00:47:00.129349 | orchestrator | 2026-04-11 00:47:00 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED
2026-04-11 00:47:00.130138 | orchestrator | 2026-04-11 00:47:00 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED
2026-04-11 00:47:00.131768 | orchestrator | 2026-04-11 00:47:00 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:47:00.132297 | orchestrator | 2026-04-11 00:47:00 | INFO  | Task 805f4d37-b2d8-4a8d-97e3-b1bf26caade2 is in state SUCCESS
2026-04-11 00:47:00.132550 | orchestrator |
2026-04-11 00:47:00.132575 | orchestrator |
2026-04-11 00:47:00.132580 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 00:47:00.132584 | orchestrator |
2026-04-11 00:47:00.132587 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 00:47:00.132591 | orchestrator | Saturday 11 April 2026 00:45:42 +0000 (0:00:00.578) 0:00:00.578 ********
2026-04-11 00:47:00.132594 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-04-11 00:47:00.132598 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-04-11 00:47:00.132601 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-04-11 00:47:00.132604 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-04-11 00:47:00.132607 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-04-11 00:47:00.132610 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-04-11 00:47:00.132614 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-04-11 00:47:00.132617 | orchestrator |
2026-04-11 00:47:00.132620 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-04-11 00:47:00.132623 |
orchestrator |
2026-04-11 00:47:00.132626 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-04-11 00:47:00.132629 | orchestrator | Saturday 11 April 2026 00:45:43 +0000 (0:00:01.387) 0:00:01.966 ********
2026-04-11 00:47:00.132633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5
2026-04-11 00:47:00.132643 | orchestrator |
2026-04-11 00:47:00.132658 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-04-11 00:47:00.132662 | orchestrator | Saturday 11 April 2026 00:45:45 +0000 (0:00:02.169) 0:00:04.136 ********
2026-04-11 00:47:00.132665 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:47:00.132669 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:47:00.132672 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:47:00.132675 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:47:00.132678 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:47:00.132681 | orchestrator | ok: [testbed-manager]
2026-04-11 00:47:00.132685 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:47:00.132690 | orchestrator |
2026-04-11 00:47:00.132697 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-04-11 00:47:00.132704 | orchestrator | Saturday 11 April 2026 00:45:47 +0000 (0:00:02.293) 0:00:06.430 ********
2026-04-11 00:47:00.132709 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:47:00.132713 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:47:00.132718 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:47:00.132722 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:47:00.132727 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:47:00.132731 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:47:00.132736 | orchestrator
| ok: [testbed-manager]
2026-04-11 00:47:00.132741 | orchestrator |
2026-04-11 00:47:00.132745 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-04-11 00:47:00.132750 | orchestrator | Saturday 11 April 2026 00:45:52 +0000 (0:00:04.386) 0:00:10.816 ********
2026-04-11 00:47:00.132754 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:47:00.132759 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:47:00.132763 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:47:00.132768 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:47:00.132772 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:47:00.132777 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:47:00.132782 | orchestrator | changed: [testbed-manager]
2026-04-11 00:47:00.132787 | orchestrator |
2026-04-11 00:47:00.132792 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-04-11 00:47:00.132805 | orchestrator | Saturday 11 April 2026 00:45:54 +0000 (0:00:02.305) 0:00:13.122 ********
2026-04-11 00:47:00.132811 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:47:00.132817 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:47:00.132821 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:47:00.132829 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:47:00.132832 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:47:00.132835 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:47:00.132838 | orchestrator | changed: [testbed-manager]
2026-04-11 00:47:00.132841 | orchestrator |
2026-04-11 00:47:00.132845 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-04-11 00:47:00.132848 | orchestrator | Saturday 11 April 2026 00:46:05 +0000 (0:00:11.113) 0:00:24.235 ********
2026-04-11 00:47:00.132851 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:47:00.132854 | orchestrator | changed:
[testbed-node-1]
2026-04-11 00:47:00.132857 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:47:00.132860 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:47:00.132863 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:47:00.132866 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:47:00.132869 | orchestrator | changed: [testbed-manager]
2026-04-11 00:47:00.132873 | orchestrator |
2026-04-11 00:47:00.132876 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-04-11 00:47:00.132879 | orchestrator | Saturday 11 April 2026 00:46:30 +0000 (0:00:24.353) 0:00:48.588 ********
2026-04-11 00:47:00.132882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:47:00.132886 | orchestrator |
2026-04-11 00:47:00.132889 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-04-11 00:47:00.132893 | orchestrator | Saturday 11 April 2026 00:46:31 +0000 (0:00:01.919) 0:00:50.508 ********
2026-04-11 00:47:00.132896 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-04-11 00:47:00.132899 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-04-11 00:47:00.132903 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-04-11 00:47:00.132906 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-04-11 00:47:00.132914 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-04-11 00:47:00.132918 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-04-11 00:47:00.132921 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-04-11 00:47:00.132924 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-04-11 00:47:00.132927 | orchestrator |
changed: [testbed-node-0] => (item=stream.conf)
2026-04-11 00:47:00.132930 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-04-11 00:47:00.132934 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-04-11 00:47:00.132937 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-04-11 00:47:00.132940 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-04-11 00:47:00.132943 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-04-11 00:47:00.132946 | orchestrator |
2026-04-11 00:47:00.132949 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-04-11 00:47:00.132953 | orchestrator | Saturday 11 April 2026 00:46:38 +0000 (0:00:06.117) 0:00:56.626 ********
2026-04-11 00:47:00.132956 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:47:00.132959 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:47:00.132962 | orchestrator | ok: [testbed-manager]
2026-04-11 00:47:00.132965 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:47:00.132968 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:47:00.132971 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:47:00.132974 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:47:00.132978 | orchestrator |
2026-04-11 00:47:00.132981 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-04-11 00:47:00.132987 | orchestrator | Saturday 11 April 2026 00:46:40 +0000 (0:00:02.191) 0:00:58.817 ********
2026-04-11 00:47:00.132990 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:47:00.132993 | orchestrator | changed: [testbed-manager]
2026-04-11 00:47:00.132997 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:47:00.133000 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:47:00.133004 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:47:00.133009 | orchestrator | changed: [testbed-node-4]
2026-04-11
00:47:00.133017 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:47:00.133023 | orchestrator |
2026-04-11 00:47:00.133027 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-04-11 00:47:00.133032 | orchestrator | Saturday 11 April 2026 00:46:41 +0000 (0:00:01.337) 0:01:00.155 ********
2026-04-11 00:47:00.133037 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:47:00.133042 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:47:00.133047 | orchestrator | ok: [testbed-manager]
2026-04-11 00:47:00.133052 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:47:00.133057 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:47:00.133062 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:47:00.133068 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:47:00.133073 | orchestrator |
2026-04-11 00:47:00.133078 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-04-11 00:47:00.133083 | orchestrator | Saturday 11 April 2026 00:46:43 +0000 (0:00:01.416) 0:01:01.571 ********
2026-04-11 00:47:00.133089 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:47:00.133094 | orchestrator | ok: [testbed-manager]
2026-04-11 00:47:00.133099 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:47:00.133104 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:47:00.133109 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:47:00.133115 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:47:00.133120 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:47:00.133125 | orchestrator |
2026-04-11 00:47:00.133130 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-04-11 00:47:00.133135 | orchestrator | Saturday 11 April 2026 00:46:44 +0000 (0:00:01.706) 0:01:03.278 ********
2026-04-11 00:47:00.133140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-04-11 00:47:00.133147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:47:00.133153 | orchestrator |
2026-04-11 00:47:00.133162 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-04-11 00:47:00.133168 | orchestrator | Saturday 11 April 2026 00:46:45 +0000 (0:00:01.121) 0:01:04.399 ********
2026-04-11 00:47:00.133174 | orchestrator | changed: [testbed-manager]
2026-04-11 00:47:00.133179 | orchestrator |
2026-04-11 00:47:00.133185 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-04-11 00:47:00.133191 | orchestrator | Saturday 11 April 2026 00:46:47 +0000 (0:00:01.616) 0:01:06.016 ********
2026-04-11 00:47:00.133197 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:47:00.133202 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:47:00.133208 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:47:00.133214 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:47:00.133220 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:47:00.133225 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:47:00.133231 | orchestrator | changed: [testbed-manager]
2026-04-11 00:47:00.133237 | orchestrator |
2026-04-11 00:47:00.133242 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:47:00.133248 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:47:00.133255 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:47:00.133266 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:47:00.133272 | orchestrator |
testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:47:00.133282 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:47:00.133289 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:47:00.133296 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:47:00.133302 | orchestrator | 2026-04-11 00:47:00.133307 | orchestrator | 2026-04-11 00:47:00.133313 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:47:00.133319 | orchestrator | Saturday 11 April 2026 00:46:58 +0000 (0:00:11.364) 0:01:17.380 ******** 2026-04-11 00:47:00.133324 | orchestrator | =============================================================================== 2026-04-11 00:47:00.133330 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 24.35s 2026-04-11 00:47:00.133335 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.37s 2026-04-11 00:47:00.133341 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.11s 2026-04-11 00:47:00.133347 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.12s 2026-04-11 00:47:00.133352 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.39s 2026-04-11 00:47:00.133358 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.31s 2026-04-11 00:47:00.133364 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.29s 2026-04-11 00:47:00.133369 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.19s 2026-04-11 00:47:00.133375 | orchestrator | 
osism.services.netdata : Include distribution specific install tasks ---- 2.17s 2026-04-11 00:47:00.133380 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.92s 2026-04-11 00:47:00.133386 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.71s 2026-04-11 00:47:00.133392 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.62s 2026-04-11 00:47:00.133397 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.42s 2026-04-11 00:47:00.133403 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.39s 2026-04-11 00:47:00.133409 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.34s 2026-04-11 00:47:00.133414 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.12s 2026-04-11 00:47:00.133494 | orchestrator | 2026-04-11 00:47:00 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:00.135405 | orchestrator | 2026-04-11 00:47:00 | INFO  | Task 544eb7d9-b1ea-4ec2-91fd-97e765db50a1 is in state STARTED 2026-04-11 00:47:00.136107 | orchestrator | 2026-04-11 00:47:00 | INFO  | Task 2206daab-52a0-421a-85ca-982db2e5f0bb is in state STARTED 2026-04-11 00:47:00.136175 | orchestrator | 2026-04-11 00:47:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:47:03.181115 | orchestrator | 2026-04-11 00:47:03 | INFO  | Task f58363f8-6bc7-4dfb-aa8b-b3f6e88a2521 is in state STARTED 2026-04-11 00:47:03.181448 | orchestrator | 2026-04-11 00:47:03 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:47:03.182545 | orchestrator | 2026-04-11 00:47:03 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state STARTED 2026-04-11 00:47:03.183385 | orchestrator | 2026-04-11 00:47:03 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in 
state STARTED 2026-04-11 00:47:03.184352 | orchestrator | 2026-04-11 00:47:03 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:03.185782 | orchestrator | 2026-04-11 00:47:03 | INFO  | Task 544eb7d9-b1ea-4ec2-91fd-97e765db50a1 is in state STARTED 2026-04-11 00:47:03.187118 | orchestrator | 2026-04-11 00:47:03 | INFO  | Task 2206daab-52a0-421a-85ca-982db2e5f0bb is in state STARTED 2026-04-11 00:47:03.187257 | orchestrator | 2026-04-11 00:47:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:47:06.224046 | orchestrator | 2026-04-11 00:47:06 | INFO  | Task f58363f8-6bc7-4dfb-aa8b-b3f6e88a2521 is in state STARTED 2026-04-11 00:47:06.224764 | orchestrator | 2026-04-11 00:47:06 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:47:06.225384 | orchestrator | 2026-04-11 00:47:06 | INFO  | Task c4b11fa6-aaf7-4730-b252-ba7b6d837525 is in state SUCCESS 2026-04-11 00:47:06.226323 | orchestrator | 2026-04-11 00:47:06 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:47:06.237953 | orchestrator | 2026-04-11 00:47:06 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:06.238004 | orchestrator | 2026-04-11 00:47:06 | INFO  | Task 544eb7d9-b1ea-4ec2-91fd-97e765db50a1 is in state STARTED 2026-04-11 00:47:06.238045 | orchestrator | 2026-04-11 00:47:06 | INFO  | Task 2206daab-52a0-421a-85ca-982db2e5f0bb is in state STARTED 2026-04-11 00:47:06.238057 | orchestrator | 2026-04-11 00:47:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:47:09.249979 | orchestrator | 2026-04-11 00:47:09 | INFO  | Task f58363f8-6bc7-4dfb-aa8b-b3f6e88a2521 is in state STARTED 2026-04-11 00:47:09.250180 | orchestrator | 2026-04-11 00:47:09 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:47:09.251249 | orchestrator | 2026-04-11 00:47:09 | INFO  | Task c6e9b0c3-6e5f-47cd-a87a-3c079edb8892 is in state 
STARTED 2026-04-11 00:47:09.252243 | orchestrator | 2026-04-11 00:47:09 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:47:09.252529 | orchestrator | 2026-04-11 00:47:09 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:09.254466 | orchestrator | 2026-04-11 00:47:09 | INFO  | Task 544eb7d9-b1ea-4ec2-91fd-97e765db50a1 is in state SUCCESS 2026-04-11 00:47:09.255730 | orchestrator | 2026-04-11 00:47:09.255762 | orchestrator | 2026-04-11 00:47:09.255770 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-04-11 00:47:09.255778 | orchestrator | 2026-04-11 00:47:09.255785 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-04-11 00:47:09.255871 | orchestrator | Saturday 11 April 2026 00:46:00 +0000 (0:00:00.877) 0:00:00.877 ******** 2026-04-11 00:47:09.255881 | orchestrator | ok: [testbed-manager] 2026-04-11 00:47:09.255889 | orchestrator | 2026-04-11 00:47:09.255896 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-04-11 00:47:09.255903 | orchestrator | Saturday 11 April 2026 00:46:02 +0000 (0:00:02.072) 0:00:02.950 ******** 2026-04-11 00:47:09.255910 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-04-11 00:47:09.255917 | orchestrator | 2026-04-11 00:47:09.255923 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-04-11 00:47:09.255930 | orchestrator | Saturday 11 April 2026 00:46:03 +0000 (0:00:01.535) 0:00:04.485 ******** 2026-04-11 00:47:09.255990 | orchestrator | changed: [testbed-manager] 2026-04-11 00:47:09.255998 | orchestrator | 2026-04-11 00:47:09.256005 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-04-11 00:47:09.256038 | orchestrator | Saturday 11 April 2026 00:46:06 +0000 (0:00:02.999) 0:00:07.485 
******** 2026-04-11 00:47:09.256046 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-04-11 00:47:09.256053 | orchestrator | ok: [testbed-manager] 2026-04-11 00:47:09.256060 | orchestrator | 2026-04-11 00:47:09.256067 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-04-11 00:47:09.256074 | orchestrator | Saturday 11 April 2026 00:47:00 +0000 (0:00:54.186) 0:01:01.671 ******** 2026-04-11 00:47:09.256081 | orchestrator | changed: [testbed-manager] 2026-04-11 00:47:09.256088 | orchestrator | 2026-04-11 00:47:09.256095 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:47:09.256102 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:47:09.256110 | orchestrator | 2026-04-11 00:47:09.256117 | orchestrator | 2026-04-11 00:47:09.256228 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:47:09.256239 | orchestrator | Saturday 11 April 2026 00:47:03 +0000 (0:00:03.065) 0:01:04.737 ******** 2026-04-11 00:47:09.256246 | orchestrator | =============================================================================== 2026-04-11 00:47:09.256253 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 54.19s 2026-04-11 00:47:09.256261 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.07s 2026-04-11 00:47:09.256268 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 3.00s 2026-04-11 00:47:09.256275 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.07s 2026-04-11 00:47:09.256282 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.54s 2026-04-11 00:47:09.256289 | orchestrator | 2026-04-11 
00:47:09.256297 | orchestrator | 2026-04-11 00:47:09.256304 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 00:47:09.256311 | orchestrator | 2026-04-11 00:47:09.256318 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 00:47:09.256325 | orchestrator | Saturday 11 April 2026 00:46:55 +0000 (0:00:00.466) 0:00:00.466 ******** 2026-04-11 00:47:09.256332 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:47:09.256339 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:47:09.256346 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:47:09.256353 | orchestrator | 2026-04-11 00:47:09.256360 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 00:47:09.256367 | orchestrator | Saturday 11 April 2026 00:46:55 +0000 (0:00:00.617) 0:00:01.084 ******** 2026-04-11 00:47:09.256374 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-11 00:47:09.256381 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-11 00:47:09.256388 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-11 00:47:09.256396 | orchestrator | 2026-04-11 00:47:09.256404 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-11 00:47:09.256411 | orchestrator | 2026-04-11 00:47:09.256418 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-11 00:47:09.256426 | orchestrator | Saturday 11 April 2026 00:46:56 +0000 (0:00:00.273) 0:00:01.358 ******** 2026-04-11 00:47:09.256433 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:47:09.256441 | orchestrator | 2026-04-11 00:47:09.256448 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-11 00:47:09.256455 | 
orchestrator | Saturday 11 April 2026 00:46:56 +0000 (0:00:00.458) 0:00:01.817 ******** 2026-04-11 00:47:09.256479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256572 | orchestrator | 2026-04-11 00:47:09.256578 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-11 00:47:09.256584 | orchestrator | Saturday 11 April 2026 00:46:58 +0000 (0:00:01.716) 0:00:03.534 ******** 2026-04-11 00:47:09.256592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256703 | orchestrator | 2026-04-11 00:47:09.256710 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-11 00:47:09.256717 | orchestrator | Saturday 11 April 2026 00:47:00 +0000 (0:00:02.641) 0:00:06.175 ******** 2026-04-11 00:47:09.256730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256796 | orchestrator | 2026-04-11 00:47:09.256804 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-04-11 00:47:09.256811 | orchestrator | Saturday 11 April 2026 00:47:03 +0000 (0:00:02.973) 0:00:09.148 ******** 2026-04-11 00:47:09.256818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 00:47:09.256870 | orchestrator | 2026-04-11 00:47:09.256878 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-04-11 00:47:09.256887 | orchestrator | Saturday 11 April 2026 00:47:05 +0000 (0:00:01.972) 0:00:11.120 ******** 2026-04-11 00:47:09.256895 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 00:47:09.256905 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:09.256912 | orchestrator | } 2026-04-11 00:47:09.256921 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 00:47:09.256929 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:09.256936 | orchestrator | } 2026-04-11 
00:47:09.256944 | orchestrator | changed: [testbed-node-2] => {
2026-04-11 00:47:09.256953 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 00:47:09.256960 | orchestrator | }
2026-04-11 00:47:09.256968 | orchestrator |
2026-04-11 00:47:09.256976 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-11 00:47:09.256984 | orchestrator | Saturday 11 April 2026 00:47:06 +0000 (0:00:00.397) 0:00:11.518 ********
2026-04-11 00:47:09.256993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 00:47:09.257007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 00:47:09.257015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 00:47:09.257024 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:47:09.257036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 00:47:09.257045 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:47:09.257057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 00:47:09.257065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 00:47:09.257072 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:47:09.257079 | orchestrator |
2026-04-11 00:47:09.257087 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-11 00:47:09.257094 | orchestrator | Saturday 11 April 2026 00:47:07 +0000 (0:00:00.935) 0:00:12.453 ********
2026-04-11 00:47:09.257101 | orchestrator |
2026-04-11 00:47:09.257108 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-11 00:47:09.257115 | orchestrator | Saturday 11 April 2026 00:47:07 +0000 (0:00:00.070) 0:00:12.524 ********
2026-04-11 00:47:09.257122 | orchestrator |
2026-04-11 00:47:09.257128 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-11 00:47:09.257135 | orchestrator | Saturday 11 April 2026 00:47:07 +0000 (0:00:00.064) 0:00:12.589 ********
2026-04-11 00:47:09.257143 | orchestrator |
2026-04-11 00:47:09.257149 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-11 00:47:09.257156 | orchestrator | Saturday 11 April 2026 00:47:07 +0000 (0:00:00.083) 0:00:12.672 ********
2026-04-11 00:47:09.257177 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_e85lf27x/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_e85lf27x/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_e85lf27x/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_e85lf27x/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis: Internal Server Error (\"unknown: repository kolla/release/2024.2/redis not found\")\\n'"}
2026-04-11 00:47:09.257204 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_roj9gpa4/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_roj9gpa4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_roj9gpa4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_roj9gpa4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis: Internal Server Error (\"unknown: repository kolla/release/2024.2/redis not found\")\\n'"}
2026-04-11 00:47:09.257217 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_2awm2hq8/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_2awm2hq8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_2awm2hq8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_2awm2hq8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis: Internal Server Error (\"unknown: repository kolla/release/2024.2/redis not found\")\\n'"}
2026-04-11 00:47:09.257229 | orchestrator |
2026-04-11 00:47:09.257236 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:47:09.257244 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-11 00:47:09.257252 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-11 00:47:09.257259 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-11 00:47:09.257267 | orchestrator |
2026-04-11 00:47:09.257274 | orchestrator |
2026-04-11 00:47:09.257281 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:47:09.257288 | orchestrator | Saturday 11 April 2026 00:47:08 +0000 (0:00:01.426) 0:00:14.098 ********
2026-04-11 00:47:09.257300 | orchestrator | ===============================================================================
2026-04-11 00:47:09.257308 | orchestrator | redis : Copying over redis config files --------------------------------- 2.97s
2026-04-11 00:47:09.257315 | orchestrator | redis : Copying over default config.json files -------------------------- 2.64s
2026-04-11 00:47:09.257322 | orchestrator | service-check-containers : redis | Check containers --------------------- 1.97s
2026-04-11 00:47:09.257329 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.72s
2026-04-11 00:47:09.257336 | orchestrator | redis : Restart redis container ----------------------------------------- 1.43s
2026-04-11 00:47:09.257343 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.94s
2026-04-11 00:47:09.257350 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.62s
2026-04-11 00:47:09.257357 | orchestrator | redis : include_tasks --------------------------------------------------- 0.46s
2026-04-11 00:47:09.257364 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.40s
2026-04-11 00:47:09.257371 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.27s
2026-04-11 00:47:09.257378 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s
2026-04-11 00:47:09.257389 | orchestrator | 2026-04-11 00:47:09 | INFO  | Task 2206daab-52a0-421a-85ca-982db2e5f0bb is in state SUCCESS
2026-04-11 00:47:09.257397 | orchestrator |
2026-04-11 00:47:09.257404 | orchestrator |
2026-04-11 00:47:09.257411 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 00:47:09.257418 | orchestrator |
2026-04-11 00:47:09.257425 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 00:47:09.257432 | orchestrator | Saturday 11 April 2026 00:46:56 +0000 (0:00:00.396) 0:00:00.396 ********
2026-04-11 00:47:09.257439 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:47:09.257446 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:47:09.257453 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:47:09.257460 | orchestrator |
2026-04-11 00:47:09.257466 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 00:47:09.257473 | orchestrator | Saturday 11 April 2026 00:46:56 +0000 (0:00:00.746) 0:00:01.142 ********
2026-04-11 00:47:09.257480 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-04-11 00:47:09.257487 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-04-11 00:47:09.257495 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-04-11 00:47:09.257502 | orchestrator |
2026-04-11 00:47:09.257509 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-04-11 00:47:09.257516 | orchestrator |
2026-04-11 00:47:09.257526 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-04-11 00:47:09.257533 | orchestrator | Saturday 11 April 2026 00:46:57 +0000 (0:00:00.360) 0:00:01.503 ********
2026-04-11 00:47:09.257540 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:47:09.257548 | orchestrator |
2026-04-11 00:47:09.257554 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-04-11 00:47:09.257559 | orchestrator | Saturday 11 April 2026 00:46:57 +0000 (0:00:00.623) 0:00:02.126 ********
2026-04-11 00:47:09.257565 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-11 00:47:09.257571 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-11 00:47:09.257578 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-11 00:47:09.257584 | orchestrator |
2026-04-11 00:47:09.257590 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-04-11 00:47:09.257597 | orchestrator | Saturday 11 April 2026 00:46:59 +0000 (0:00:01.501) 0:00:03.629 ********
2026-04-11 00:47:09.257604 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-11 00:47:09.257611 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-11 00:47:09.257618 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-11 00:47:09.257625 | orchestrator |
2026-04-11 00:47:09.257632 | orchestrator | TASK [service-check-containers : memcached | Check containers] *****************
2026-04-11 00:47:09.257658 | orchestrator | Saturday 11 April 2026 00:47:01 +0000 (0:00:01.725) 0:00:05.354 ********
2026-04-11 00:47:09.257666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-11 00:47:09.257680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-11 00:47:09.257694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-11 00:47:09.257701 | orchestrator |
2026-04-11 00:47:09.257708 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] ***
2026-04-11 00:47:09.257713 | orchestrator | Saturday 11 April 2026 00:47:02 +0000 (0:00:01.510) 0:00:06.865 ********
2026-04-11 00:47:09.257719 | orchestrator | changed: [testbed-node-0] => {
2026-04-11 00:47:09.257725 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 00:47:09.257732 | orchestrator | }
2026-04-11 00:47:09.257740 | orchestrator | changed: [testbed-node-1] => {
2026-04-11 00:47:09.257748 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 00:47:09.257756 | orchestrator | }
2026-04-11 00:47:09.257764 | orchestrator | changed: [testbed-node-2] => {
2026-04-11 00:47:09.257771 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 00:47:09.257779 | orchestrator | }
2026-04-11 00:47:09.257787 | orchestrator |
2026-04-11 00:47:09.257795 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-11 00:47:09.257803 | orchestrator | Saturday 11 April 2026 00:47:03 +0000 (0:00:00.478) 0:00:07.343 ********
2026-04-11 00:47:09.257816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-11 00:47:09.257825 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:47:09.257833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-11 00:47:09.257846 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:47:09.257859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-11 00:47:09.257867 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:47:09.257876 | orchestrator |
2026-04-11 00:47:09.257883 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-04-11 00:47:09.257892 | orchestrator | Saturday 11 April 2026 00:47:04 +0000 (0:00:01.757) 0:00:09.101 ********
2026-04-11 00:47:09.257900 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_q7ucoxk9/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_q7ucoxk9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_q7ucoxk9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_q7ucoxk9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached: Internal Server Error (\"unknown: repository kolla/release/2024.2/memcached not found\")\\n'"}
2026-04-11 00:47:09.257915 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_o494debf/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_o494debf/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_o494debf/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_o494debf/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached: Internal Server Error (\"unknown: repository kolla/release/2024.2/memcached not found\")\\n'"}
2026-04-11 00:47:09.257935 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_isgwc03m/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_isgwc03m/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_isgwc03m/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_isgwc03m/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached: Internal Server Error (\"unknown: repository kolla/release/2024.2/memcached not found\")\\n'"}
2026-04-11 00:47:09.257949 | orchestrator |
2026-04-11 00:47:09.257957 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:47:09.257964 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-11 00:47:09.257974 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-11 00:47:09.257981 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-11 00:47:09.257989 | orchestrator |
2026-04-11 00:47:09.257996 | orchestrator |
2026-04-11 00:47:09.258004 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:47:09.258011 | orchestrator | Saturday 11 April 2026 00:47:06 +0000 (0:00:01.710) 0:00:10.812 ********
2026-04-11 00:47:09.258063 | orchestrator | ===============================================================================
2026-04-11 00:47:09.258071 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.76s
2026-04-11 00:47:09.258078 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.73s
2026-04-11 00:47:09.258085 | orchestrator | memcached : Restart memcached container --------------------------------- 1.71s
2026-04-11 00:47:09.258093 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.51s
2026-04-11 00:47:09.258101 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.50s
2026-04-11 00:47:09.258107 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.75s
2026-04-11 00:47:09.258114 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.62s
2026-04-11 00:47:09.258122 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.48s
2026-04-11 00:47:09.258129 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s
2026-04-11 00:47:09.258137 | orchestrator | 2026-04-11 00:47:09 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:47:12.313919 | orchestrator | 2026-04-11 00:47:12 | INFO  | Task f58363f8-6bc7-4dfb-aa8b-b3f6e88a2521 is in state STARTED
2026-04-11 00:47:12.314566 | orchestrator | 2026-04-11 00:47:12 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED
2026-04-11 00:47:12.318834 | orchestrator | 2026-04-11 00:47:12 | INFO  | Task c6e9b0c3-6e5f-47cd-a87a-3c079edb8892 is in state STARTED
2026-04-11 00:47:12.319765 | orchestrator | 2026-04-11 00:47:12 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:47:12.321555 | orchestrator | 2026-04-11 00:47:12 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:47:12.321599 | orchestrator | 2026-04-11 00:47:12 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:47:15.379880 | orchestrator | 2026-04-11 00:47:15 | INFO  | Task f58363f8-6bc7-4dfb-aa8b-b3f6e88a2521 is in state STARTED
2026-04-11 00:47:15.381104 | orchestrator | 2026-04-11 00:47:15 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED
2026-04-11 00:47:15.383316 | orchestrator | 2026-04-11 00:47:15 | INFO  | Task c6e9b0c3-6e5f-47cd-a87a-3c079edb8892 is in state STARTED
2026-04-11 00:47:15.385473 | orchestrator | 2026-04-11 00:47:15 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:47:15.386381 | orchestrator | 2026-04-11 00:47:15 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:47:15.386517 | orchestrator | 2026-04-11 00:47:15 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:47:18.433975 | orchestrator | 2026-04-11 00:47:18 | INFO  | Task f58363f8-6bc7-4dfb-aa8b-b3f6e88a2521 is in state STARTED
2026-04-11 00:47:18.437354 | orchestrator | 2026-04-11 00:47:18 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED
2026-04-11 00:47:18.440912 | orchestrator | 2026-04-11 00:47:18 | INFO  | Task c6e9b0c3-6e5f-47cd-a87a-3c079edb8892 is in state STARTED
2026-04-11 00:47:18.442951 | orchestrator | 2026-04-11 00:47:18 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:47:18.444068 | orchestrator | 2026-04-11 00:47:18 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:47:18.444952 | orchestrator | 2026-04-11 00:47:18 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:47:21.481897 | orchestrator | 2026-04-11 00:47:21 | INFO  | Task f58363f8-6bc7-4dfb-aa8b-b3f6e88a2521 is in state SUCCESS
2026-04-11 00:47:21.483551 | orchestrator |
2026-04-11 00:47:21.483595 | orchestrator |
2026-04-11 00:47:21.483603 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 00:47:21.483608 | orchestrator |
2026-04-11 00:47:21.483614 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 00:47:21.483623 | orchestrator | Saturday 11 April 2026 00:46:54 +0000 (0:00:00.365) 0:00:00.365 ********
2026-04-11 00:47:21.483638 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:47:21.483645 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:47:21.483651 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:47:21.483667 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:47:21.483672 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:47:21.483678 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:47:21.483683 | orchestrator |
2026-04-11 00:47:21.483688 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 00:47:21.483694 | orchestrator | Saturday 11 April 2026 00:46:55 +0000 (0:00:00.452) 0:00:00.817 ********
2026-04-11 00:47:21.483699 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 00:47:21.483705 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 00:47:21.483711 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 00:47:21.483717 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 00:47:21.483722 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 00:47:21.483728 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 00:47:21.483734 | orchestrator |
2026-04-11 00:47:21.483739 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-11 00:47:21.483745 | orchestrator |
2026-04-11 00:47:21.483749 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-11 00:47:21.483753 | orchestrator | Saturday 11 April 2026 00:46:55 +0000 (0:00:00.631) 0:00:01.449 ********
2026-04-11 00:47:21.483757 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:47:21.483761 | orchestrator |
2026-04-11 00:47:21.483765 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-11 00:47:21.483769 | orchestrator | Saturday 11 April 2026 00:46:57 +0000 (0:00:01.548) 0:00:02.998 ********
2026-04-11 00:47:21.483790 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-11 00:47:21.483796 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-11 00:47:21.483801 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-11 00:47:21.483806 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-11 00:47:21.483811 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-11 00:47:21.483817 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-11 00:47:21.483822 | orchestrator |
2026-04-11 00:47:21.483827 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-11 00:47:21.483878 | orchestrator | Saturday 11 April 2026 00:46:58 +0000 (0:00:01.250) 0:00:04.248 ********
2026-04-11 00:47:21.483884 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-11 00:47:21.483891 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-11 00:47:21.483896 | orchestrator | changed:
[testbed-node-4] => (item=openvswitch) 2026-04-11 00:47:21.483940 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-11 00:47:21.484314 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-11 00:47:21.484337 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-11 00:47:21.484343 | orchestrator | 2026-04-11 00:47:21.484349 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-11 00:47:21.484354 | orchestrator | Saturday 11 April 2026 00:47:01 +0000 (0:00:02.355) 0:00:06.604 ******** 2026-04-11 00:47:21.484359 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-11 00:47:21.484364 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-11 00:47:21.484369 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:47:21.484375 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-11 00:47:21.484381 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:47:21.484386 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-11 00:47:21.484391 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:47:21.484396 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-11 00:47:21.484402 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:47:21.484407 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:47:21.484412 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-11 00:47:21.484417 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:47:21.484422 | orchestrator | 2026-04-11 00:47:21.484427 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-11 00:47:21.484433 | orchestrator | Saturday 11 April 2026 00:47:02 +0000 (0:00:01.347) 0:00:07.951 ******** 2026-04-11 00:47:21.484439 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:47:21.484444 | orchestrator | 
skipping: [testbed-node-1] 2026-04-11 00:47:21.484449 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:47:21.484455 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:47:21.484460 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:47:21.484465 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:47:21.484471 | orchestrator | 2026-04-11 00:47:21.484476 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-11 00:47:21.484482 | orchestrator | Saturday 11 April 2026 00:47:03 +0000 (0:00:00.835) 0:00:08.787 ******** 2026-04-11 00:47:21.484500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484534 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484543 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484547 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484571 | orchestrator | 2026-04-11 00:47:21.484574 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-11 00:47:21.484578 | orchestrator | Saturday 11 April 2026 00:47:05 +0000 
(0:00:02.064) 0:00:10.851 ******** 2026-04-11 00:47:21.484581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484594 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484599 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484605 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484621 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484663 | orchestrator | 2026-04-11 00:47:21.484666 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-11 00:47:21.484670 | orchestrator | Saturday 11 April 2026 00:47:08 +0000 (0:00:02.866) 0:00:13.718 ******** 2026-04-11 00:47:21.484673 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:47:21.484676 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:47:21.484680 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:47:21.484683 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:47:21.484686 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:47:21.484690 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:47:21.484693 | orchestrator | 2026-04-11 00:47:21.484696 | orchestrator | TASK 
[service-check-containers : openvswitch | Check containers] *************** 2026-04-11 00:47:21.484700 | orchestrator | Saturday 11 April 2026 00:47:08 +0000 (0:00:00.711) 0:00:14.429 ******** 2026-04-11 00:47:21.484703 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484708 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484729 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484734 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 00:47:21.484760 | orchestrator | 2026-04-11 00:47:21.484764 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-04-11 00:47:21.484767 | orchestrator | Saturday 11 April 2026 00:47:11 +0000 (0:00:02.480) 0:00:16.910 ******** 2026-04-11 00:47:21.484770 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 00:47:21.484774 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:21.484777 | orchestrator | } 2026-04-11 00:47:21.484781 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 00:47:21.484784 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:21.484788 | 
orchestrator | } 2026-04-11 00:47:21.484791 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 00:47:21.484794 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:21.484800 | orchestrator | } 2026-04-11 00:47:21.484804 | orchestrator | changed: [testbed-node-3] => { 2026-04-11 00:47:21.484807 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:21.484811 | orchestrator | } 2026-04-11 00:47:21.484814 | orchestrator | changed: [testbed-node-4] => { 2026-04-11 00:47:21.484820 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:21.484823 | orchestrator | } 2026-04-11 00:47:21.484826 | orchestrator | changed: [testbed-node-5] => { 2026-04-11 00:47:21.484830 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:21.484833 | orchestrator | } 2026-04-11 00:47:21.484836 | orchestrator | 2026-04-11 00:47:21.484840 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 00:47:21.484843 | orchestrator | Saturday 11 April 2026 00:47:12 +0000 (0:00:00.863) 0:00:17.774 ******** 2026-04-11 00:47:21.484847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-11 00:47:21.484854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-11 00:47:21.484857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-11 00:47:21.484861 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:47:21.484864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-11 00:47:21.484868 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:47:21.484873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-11 00:47:21.484879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-11 00:47:21.484882 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:47:21.484886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-11 00:47:21.484891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-11 00:47:21.484895 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:47:21.484898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-11 00:47:21.484902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-11 00:47:21.484907 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:47:21.484912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-11 00:47:21.484916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-11 00:47:21.484920 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:47:21.484923 | orchestrator | 2026-04-11 00:47:21.484926 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 00:47:21.484930 | orchestrator | Saturday 11 April 2026 00:47:14 +0000 (0:00:02.314) 0:00:20.089 ******** 2026-04-11 00:47:21.484933 | orchestrator | 2026-04-11 00:47:21.484937 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 00:47:21.484941 | orchestrator | Saturday 11 April 2026 00:47:15 +0000 (0:00:00.538) 0:00:20.628 ******** 2026-04-11 00:47:21.484944 | orchestrator | 2026-04-11 00:47:21.484950 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 00:47:21.484954 | orchestrator | Saturday 11 April 2026 00:47:15 +0000 (0:00:00.181) 0:00:20.809 ******** 2026-04-11 00:47:21.484958 | orchestrator | 2026-04-11 00:47:21.484962 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 00:47:21.484965 | orchestrator | Saturday 11 April 2026 00:47:15 +0000 (0:00:00.145) 0:00:20.955 ******** 2026-04-11 00:47:21.484969 | orchestrator | 2026-04-11 00:47:21.484973 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-04-11 00:47:21.484976 | orchestrator | Saturday 11 April 2026 00:47:15 +0000 (0:00:00.131) 0:00:21.086 ******** 2026-04-11 00:47:21.484980 | orchestrator | 2026-04-11 00:47:21.484984 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 00:47:21.484988 | orchestrator | Saturday 11 April 2026 00:47:15 +0000 (0:00:00.142) 0:00:21.229 ******** 2026-04-11 00:47:21.484991 | orchestrator | 2026-04-11 00:47:21.484995 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-11 00:47:21.484999 | orchestrator | Saturday 11 April 2026 00:47:15 +0000 (0:00:00.183) 0:00:21.412 ******** 2026-04-11 00:47:21.485004 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_0sjh_mk3/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_0sjh_mk3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_0sjh_mk3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_0sjh_mk3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"} 2026-04-11 00:47:21.485265 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_y0lsa53k/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_y0lsa53k/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_y0lsa53k/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_y0lsa53k/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"} 2026-04-11 00:47:21.485278 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_c45lz5h7/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_c45lz5h7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_c45lz5h7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_c45lz5h7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n 
json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"} 2026-04-11 00:47:21.485286 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_eq7ndxaq/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/tmp/ansible_kolla_container_payload_eq7ndxaq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_eq7ndxaq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_eq7ndxaq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"} 2026-04-11 00:47:21.485297 | orchestrator | fatal: [testbed-node-4]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_72tytu7y/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_72tytu7y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_72tytu7y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_72tytu7y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"} 2026-04-11 00:47:21.485305 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ryvo5o3l/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ryvo5o3l/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_ryvo5o3l/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ryvo5o3l/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n 
json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"} 2026-04-11 00:47:21.485310 | orchestrator | 2026-04-11 00:47:21.485313 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:47:21.485318 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-11 00:47:21.485324 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-11 00:47:21.485329 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-11 00:47:21.485332 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-11 00:47:21.485336 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-11 00:47:21.485340 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-11 00:47:21.485344 | orchestrator | 2026-04-11 00:47:21.485348 | orchestrator | 2026-04-11 00:47:21.485352 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-11 00:47:21.485358 | orchestrator | Saturday 11 April 2026 00:47:18 +0000 (0:00:02.928) 0:00:24.341 ******** 2026-04-11 00:47:21.485361 | orchestrator | =============================================================================== 2026-04-11 00:47:21.485365 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 2.93s 2026-04-11 00:47:21.485370 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.87s 2026-04-11 00:47:21.485373 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.48s 2026-04-11 00:47:21.485376 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.36s 2026-04-11 00:47:21.485380 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.32s 2026-04-11 00:47:21.485383 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.06s 2026-04-11 00:47:21.485386 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.55s 2026-04-11 00:47:21.485390 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.35s 2026-04-11 00:47:21.485393 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.32s 2026-04-11 00:47:21.485396 | orchestrator | module-load : Load modules ---------------------------------------------- 1.25s 2026-04-11 00:47:21.485399 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.86s 2026-04-11 00:47:21.485403 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.84s 2026-04-11 00:47:21.485406 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.71s 2026-04-11 00:47:21.485410 | orchestrator | Group hosts based on 
enabled services ----------------------------------- 0.63s 2026-04-11 00:47:21.485413 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s 2026-04-11 00:47:21.486141 | orchestrator | 2026-04-11 00:47:21 | INFO  | Task d2581b0f-00ce-4c22-924c-6b64f4bb6894 is in state STARTED 2026-04-11 00:47:21.486949 | orchestrator | 2026-04-11 00:47:21 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:47:21.487446 | orchestrator | 2026-04-11 00:47:21 | INFO  | Task c6e9b0c3-6e5f-47cd-a87a-3c079edb8892 is in state STARTED 2026-04-11 00:47:21.490104 | orchestrator | 2026-04-11 00:47:21 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:47:21.491254 | orchestrator | 2026-04-11 00:47:21 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:21.491495 | orchestrator | 2026-04-11 00:47:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:47:24.543178 | orchestrator | 2026-04-11 00:47:24 | INFO  | Task d2581b0f-00ce-4c22-924c-6b64f4bb6894 is in state STARTED 2026-04-11 00:47:24.543928 | orchestrator | 2026-04-11 00:47:24 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:47:24.545404 | orchestrator | 2026-04-11 00:47:24 | INFO  | Task c6e9b0c3-6e5f-47cd-a87a-3c079edb8892 is in state STARTED 2026-04-11 00:47:24.546133 | orchestrator | 2026-04-11 00:47:24 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:47:24.550573 | orchestrator | 2026-04-11 00:47:24 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:24.553446 | orchestrator | 2026-04-11 00:47:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:47:27.585231 | orchestrator | 2026-04-11 00:47:27 | INFO  | Task d2581b0f-00ce-4c22-924c-6b64f4bb6894 is in state STARTED 2026-04-11 00:47:27.585300 | orchestrator | 2026-04-11 00:47:27 | INFO  | Task 
cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:47:27.585686 | orchestrator | 2026-04-11 00:47:27 | INFO  | Task c6e9b0c3-6e5f-47cd-a87a-3c079edb8892 is in state STARTED 2026-04-11 00:47:27.586337 | orchestrator | 2026-04-11 00:47:27 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:47:27.586828 | orchestrator | 2026-04-11 00:47:27 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:27.586853 | orchestrator | 2026-04-11 00:47:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:47:30.616441 | orchestrator | 2026-04-11 00:47:30 | INFO  | Task d2581b0f-00ce-4c22-924c-6b64f4bb6894 is in state STARTED 2026-04-11 00:47:30.618545 | orchestrator | 2026-04-11 00:47:30 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:47:30.621293 | orchestrator | 2026-04-11 00:47:30 | INFO  | Task c6e9b0c3-6e5f-47cd-a87a-3c079edb8892 is in state STARTED 2026-04-11 00:47:30.623219 | orchestrator | 2026-04-11 00:47:30 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:47:30.623726 | orchestrator | 2026-04-11 00:47:30 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:30.623752 | orchestrator | 2026-04-11 00:47:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:47:33.651026 | orchestrator | 2026-04-11 00:47:33 | INFO  | Task d2581b0f-00ce-4c22-924c-6b64f4bb6894 is in state STARTED 2026-04-11 00:47:33.651973 | orchestrator | 2026-04-11 00:47:33 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:47:33.655845 | orchestrator | 2026-04-11 00:47:33 | INFO  | Task c6e9b0c3-6e5f-47cd-a87a-3c079edb8892 is in state STARTED 2026-04-11 00:47:33.656317 | orchestrator | 2026-04-11 00:47:33 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:47:33.656820 | orchestrator | 2026-04-11 00:47:33 | INFO  | Task 
64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:33.656929 | orchestrator | 2026-04-11 00:47:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:47:36.681901 | orchestrator | 2026-04-11 00:47:36 | INFO  | Task d2581b0f-00ce-4c22-924c-6b64f4bb6894 is in state STARTED 2026-04-11 00:47:36.682311 | orchestrator | 2026-04-11 00:47:36 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:47:36.682912 | orchestrator | 2026-04-11 00:47:36 | INFO  | Task c6e9b0c3-6e5f-47cd-a87a-3c079edb8892 is in state STARTED 2026-04-11 00:47:36.683506 | orchestrator | 2026-04-11 00:47:36 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:47:36.684041 | orchestrator | 2026-04-11 00:47:36 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:36.684067 | orchestrator | 2026-04-11 00:47:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:47:39.710854 | orchestrator | 2026-04-11 00:47:39 | INFO  | Task d2581b0f-00ce-4c22-924c-6b64f4bb6894 is in state SUCCESS 2026-04-11 00:47:39.711507 | orchestrator | 2026-04-11 00:47:39.711534 | orchestrator | 2026-04-11 00:47:39.711542 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 00:47:39.711549 | orchestrator | 2026-04-11 00:47:39.711555 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 00:47:39.711563 | orchestrator | Saturday 11 April 2026 00:47:23 +0000 (0:00:00.213) 0:00:00.213 ******** 2026-04-11 00:47:39.711569 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:47:39.711577 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:47:39.711584 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:47:39.711590 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:47:39.711596 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:47:39.711603 | orchestrator | ok: [testbed-node-5] 2026-04-11 
00:47:39.711609 | orchestrator | 2026-04-11 00:47:39.711654 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 00:47:39.711685 | orchestrator | Saturday 11 April 2026 00:47:23 +0000 (0:00:00.561) 0:00:00.775 ******** 2026-04-11 00:47:39.711693 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-11 00:47:39.711700 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-11 00:47:39.711716 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-11 00:47:39.711723 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-11 00:47:39.711736 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-11 00:47:39.711742 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-11 00:47:39.711748 | orchestrator | 2026-04-11 00:47:39.711755 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-11 00:47:39.711761 | orchestrator | 2026-04-11 00:47:39.711768 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-11 00:47:39.711775 | orchestrator | Saturday 11 April 2026 00:47:24 +0000 (0:00:01.168) 0:00:01.943 ******** 2026-04-11 00:47:39.711783 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:47:39.711791 | orchestrator | 2026-04-11 00:47:39.711798 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-11 00:47:39.711804 | orchestrator | Saturday 11 April 2026 00:47:26 +0000 (0:00:01.358) 0:00:03.301 ******** 2026-04-11 00:47:39.711828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.711838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.711853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.711860 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.711866 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.711883 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.711895 | orchestrator | 2026-04-11 00:47:39.711902 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-11 00:47:39.711908 | orchestrator | Saturday 11 April 2026 00:47:27 +0000 (0:00:01.380) 0:00:04.682 ******** 2026-04-11 00:47:39.711915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.711946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-11 00:47:39.711954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.711961 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.711968 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.711978 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.711985 | orchestrator | 
2026-04-11 00:47:39.711992 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-11 00:47:39.711998 | orchestrator | Saturday 11 April 2026 00:47:29 +0000 (0:00:01.690) 0:00:06.373 ******** 2026-04-11 00:47:39.712005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712037 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712043 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712049 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712056 | orchestrator | 2026-04-11 00:47:39.712063 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-11 00:47:39.712069 | orchestrator | Saturday 11 April 2026 00:47:30 +0000 (0:00:01.137) 0:00:07.510 ******** 2026-04-11 00:47:39.712076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712087 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712123 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712136 | orchestrator | 2026-04-11 00:47:39.712144 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-04-11 00:47:39.712150 | orchestrator | Saturday 11 April 2026 00:47:32 +0000 (0:00:01.956) 0:00:09.466 ******** 2026-04-11 00:47:39.712157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712177 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712201 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 00:47:39.712209 | orchestrator | 2026-04-11 00:47:39.712217 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-04-11 00:47:39.712224 | orchestrator | Saturday 11 April 2026 00:47:33 +0000 (0:00:01.524) 0:00:10.991 ******** 2026-04-11 00:47:39.712230 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 00:47:39.712238 | orchestrator | 
 "msg": "Notifying handlers" 2026-04-11 00:47:39.712245 | orchestrator | } 2026-04-11 00:47:39.712253 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 00:47:39.712261 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:39.712267 | orchestrator | } 2026-04-11 00:47:39.712274 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 00:47:39.712281 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:39.712288 | orchestrator | } 2026-04-11 00:47:39.712295 | orchestrator | changed: [testbed-node-3] => { 2026-04-11 00:47:39.712302 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:39.712309 | orchestrator | } 2026-04-11 00:47:39.712316 | orchestrator | changed: [testbed-node-4] => { 2026-04-11 00:47:39.712323 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:39.712330 | orchestrator | } 2026-04-11 00:47:39.712340 | orchestrator | changed: [testbed-node-5] => { 2026-04-11 00:47:39.712347 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:39.712353 | orchestrator | } 2026-04-11 00:47:39.712360 | orchestrator | 2026-04-11 00:47:39.712367 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 00:47:39.712373 | orchestrator | Saturday 11 April 2026 00:47:34 +0000 (0:00:00.610) 0:00:11.601 ******** 2026-04-11 00:47:39.712381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:47:39.712387 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:47:39.712394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 
'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:47:39.712401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:47:39.712408 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:47:39.712415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:47:39.712426 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:47:39.712433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 
00:47:39.712439 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:47:39.712446 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:47:39.712455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:47:39.712462 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:47:39.712468 | orchestrator | 2026-04-11 00:47:39.712475 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-11 00:47:39.712481 | orchestrator | Saturday 11 April 2026 00:47:35 +0000 (0:00:01.309) 0:00:12.910 ******** 2026-04-11 00:47:39.712488 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-11 00:47:39.712495 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-11 00:47:39.712501 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-11 00:47:39.712507 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-11 00:47:39.712513 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-11 00:47:39.712599 | orchestrator | fatal: [testbed-node-5]: FAILED! 
=> {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-11 00:47:39.712607 | orchestrator | 2026-04-11 00:47:39.712634 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:47:39.712641 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-11 00:47:39.712649 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-11 00:47:39.712655 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-11 00:47:39.712661 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-11 00:47:39.712667 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-11 00:47:39.712674 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-11 00:47:39.712680 | orchestrator | 2026-04-11 00:47:39.712686 | orchestrator | 2026-04-11 00:47:39.712692 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:47:39.712698 | orchestrator | Saturday 11 April 2026 00:47:37 +0000 (0:00:01.239) 0:00:14.150 ******** 2026-04-11 00:47:39.712709 | orchestrator | =============================================================================== 2026-04-11 00:47:39.712714 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.96s 2026-04-11 00:47:39.712720 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.69s 2026-04-11 00:47:39.712726 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 1.52s 2026-04-11 00:47:39.712732 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.38s 
2026-04-11 00:47:39.712738 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.36s 2026-04-11 00:47:39.712744 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.31s 2026-04-11 00:47:39.712751 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 1.24s 2026-04-11 00:47:39.712756 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.17s 2026-04-11 00:47:39.712763 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.14s 2026-04-11 00:47:39.712769 | orchestrator | service-check-containers : ovn_controller | Notify handlers to restart containers --- 0.61s 2026-04-11 00:47:39.712775 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.56s 2026-04-11 00:47:39.712781 | orchestrator | 2026-04-11 00:47:39 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:47:39.712787 | orchestrator | 2026-04-11 00:47:39 | INFO  | Task c6e9b0c3-6e5f-47cd-a87a-3c079edb8892 is in state STARTED 2026-04-11 00:47:39.712793 | orchestrator | 2026-04-11 00:47:39 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:47:39.713565 | orchestrator | 2026-04-11 00:47:39 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:39.713645 | orchestrator | 2026-04-11 00:47:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:47:42.764212 | orchestrator | 2026-04-11 00:47:42 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:47:42.764269 | orchestrator | 2026-04-11 00:47:42 | INFO  | Task c6e9b0c3-6e5f-47cd-a87a-3c079edb8892 is in state SUCCESS 2026-04-11 00:47:42.764275 | orchestrator | 2026-04-11 00:47:42 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:47:42.764280 | orchestrator | 2026-04-11 
00:47:42 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:42.764285 | orchestrator | 2026-04-11 00:47:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:47:42.764946 | orchestrator | 2026-04-11 00:47:42.764987 | orchestrator | 2026-04-11 00:47:42.764996 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-04-11 00:47:42.765004 | orchestrator | 2026-04-11 00:47:42.765010 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-11 00:47:42.765017 | orchestrator | Saturday 11 April 2026 00:47:10 +0000 (0:00:00.138) 0:00:00.138 ******** 2026-04-11 00:47:42.765024 | orchestrator | ok: [localhost] => { 2026-04-11 00:47:42.765032 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-04-11 00:47:42.765038 | orchestrator | } 2026-04-11 00:47:42.765046 | orchestrator | 2026-04-11 00:47:42.765140 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-11 00:47:42.765148 | orchestrator | Saturday 11 April 2026 00:47:10 +0000 (0:00:00.164) 0:00:00.302 ******** 2026-04-11 00:47:42.765156 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-11 00:47:42.765163 | orchestrator | ...ignoring 2026-04-11 00:47:42.765170 | orchestrator | 2026-04-11 00:47:42.765177 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-11 00:47:42.765201 | orchestrator | Saturday 11 April 2026 00:47:13 +0000 (0:00:03.345) 0:00:03.648 ******** 2026-04-11 00:47:42.765208 | orchestrator | skipping: [localhost] 2026-04-11 00:47:42.765215 | orchestrator | 2026-04-11 00:47:42.765222 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-11 00:47:42.765228 | orchestrator | Saturday 11 April 2026 00:47:13 +0000 (0:00:00.068) 0:00:03.716 ******** 2026-04-11 00:47:42.765235 | orchestrator | ok: [localhost] 2026-04-11 00:47:42.765242 | orchestrator | 2026-04-11 00:47:42.765249 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 00:47:42.765255 | orchestrator | 2026-04-11 00:47:42.765261 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 00:47:42.765268 | orchestrator | Saturday 11 April 2026 00:47:14 +0000 (0:00:00.412) 0:00:04.129 ******** 2026-04-11 00:47:42.765274 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:47:42.765281 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:47:42.765287 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:47:42.765293 | orchestrator | 2026-04-11 00:47:42.765300 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 00:47:42.765307 | orchestrator | Saturday 11 April 2026 00:47:14 +0000 (0:00:00.424) 0:00:04.553 ******** 2026-04-11 00:47:42.765313 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-11 00:47:42.765320 | orchestrator | ok: [testbed-node-1] => 
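The failed "Check RabbitMQ service" task above is expected on a first deployment, as the preceding message notes: the playbook probes the management endpoint (192.168.16.9:15672) for the marker string "RabbitMQ Management", ignores a timeout, and only switches kolla_action_rabbitmq to "upgrade" when the service already answers. A minimal sketch of that decision logic follows; the endpoint and search string are taken from the log, but the helper names are hypothetical, not the actual kolla-ansible implementation.

```python
# Sketch of the availability check behind "Check RabbitMQ service".
# Function names are hypothetical illustrations of the playbook's logic.

def rabbitmq_already_running(page_body, search_string="RabbitMQ Management"):
    """Mirror the wait_for check: RabbitMQ counts as deployed only if
    the management UI responds with the expected marker string."""
    return search_string in page_body

def pick_kolla_action(page_body, default_action="deploy"):
    """If the probe times out (no body), keep the default action;
    otherwise switch to 'upgrade', as the play above does."""
    if page_body is not None and rabbitmq_already_running(page_body):
        return "upgrade"
    return default_action

# First deployment: the probe timed out, so the default action is kept.
print(pick_kolla_action(None))                                   # deploy
print(pick_kolla_action("<title>RabbitMQ Management</title>"))   # upgrade
```

This is why the log shows "Set kolla_action_rabbitmq = upgrade ..." being skipped and the fallback "Set kolla_action_rabbitmq = kolla_action_ng" returning ok.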
(item=enable_rabbitmq_True) 2026-04-11 00:47:42.765327 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-11 00:47:42.765333 | orchestrator | 2026-04-11 00:47:42.765340 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-11 00:47:42.765347 | orchestrator | 2026-04-11 00:47:42.765353 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-11 00:47:42.765359 | orchestrator | Saturday 11 April 2026 00:47:15 +0000 (0:00:00.724) 0:00:05.277 ******** 2026-04-11 00:47:42.765366 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:47:42.765373 | orchestrator | 2026-04-11 00:47:42.765379 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-11 00:47:42.765386 | orchestrator | Saturday 11 April 2026 00:47:16 +0000 (0:00:00.622) 0:00:05.900 ******** 2026-04-11 00:47:42.765392 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:47:42.765398 | orchestrator | 2026-04-11 00:47:42.765404 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-11 00:47:42.765411 | orchestrator | Saturday 11 April 2026 00:47:17 +0000 (0:00:01.380) 0:00:07.280 ******** 2026-04-11 00:47:42.765418 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:47:42.765425 | orchestrator | 2026-04-11 00:47:42.765432 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-11 00:47:42.765439 | orchestrator | Saturday 11 April 2026 00:47:17 +0000 (0:00:00.322) 0:00:07.603 ******** 2026-04-11 00:47:42.765446 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:47:42.765452 | orchestrator | 2026-04-11 00:47:42.765459 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-11 00:47:42.765466 | 
orchestrator | Saturday 11 April 2026 00:47:17 +0000 (0:00:00.235) 0:00:07.838 ******** 2026-04-11 00:47:42.765474 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:47:42.765480 | orchestrator | 2026-04-11 00:47:42.765488 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-11 00:47:42.765495 | orchestrator | Saturday 11 April 2026 00:47:18 +0000 (0:00:00.249) 0:00:08.088 ******** 2026-04-11 00:47:42.765502 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:47:42.765509 | orchestrator | 2026-04-11 00:47:42.765517 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-11 00:47:42.765524 | orchestrator | Saturday 11 April 2026 00:47:18 +0000 (0:00:00.234) 0:00:08.323 ******** 2026-04-11 00:47:42.765540 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:47:42.765555 | orchestrator | 2026-04-11 00:47:42.765562 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-11 00:47:42.765569 | orchestrator | Saturday 11 April 2026 00:47:19 +0000 (0:00:00.734) 0:00:09.058 ******** 2026-04-11 00:47:42.765576 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:47:42.765584 | orchestrator | 2026-04-11 00:47:42.765591 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-11 00:47:42.765598 | orchestrator | Saturday 11 April 2026 00:47:20 +0000 (0:00:00.853) 0:00:09.911 ******** 2026-04-11 00:47:42.765605 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:47:42.765631 | orchestrator | 2026-04-11 00:47:42.765638 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-11 00:47:42.765644 | orchestrator | Saturday 11 April 2026 00:47:20 +0000 (0:00:00.818) 0:00:10.729 ******** 2026-04-11 00:47:42.765650 | orchestrator | 
skipping: [testbed-node-0] 2026-04-11 00:47:42.765656 | orchestrator | 2026-04-11 00:47:42.765671 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-11 00:47:42.765677 | orchestrator | Saturday 11 April 2026 00:47:21 +0000 (0:00:00.318) 0:00:11.048 ******** 2026-04-11 00:47:42.765687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 00:47:42.765696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 00:47:42.765705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 00:47:42.765720 | orchestrator | 2026-04-11 00:47:42.765730 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-11 00:47:42.765736 | orchestrator | Saturday 11 April 2026 00:47:22 +0000 (0:00:01.266) 0:00:12.315 ******** 2026-04-11 00:47:42.765751 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 00:47:42.765760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 00:47:42.765768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 00:47:42.765776 | orchestrator | 2026-04-11 00:47:42.765783 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-11 00:47:42.765795 | orchestrator | Saturday 11 April 2026 00:47:23 +0000 (0:00:01.492) 0:00:13.808 ******** 2026-04-11 00:47:42.765802 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-11 00:47:42.765810 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-11 00:47:42.765817 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-11 00:47:42.765825 | orchestrator | 2026-04-11 00:47:42.765832 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-04-11 00:47:42.765840 | orchestrator | Saturday 11 April 2026 00:47:25 +0000 (0:00:01.463) 0:00:15.271 ******** 2026-04-11 00:47:42.765849 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-11 00:47:42.765860 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-11 00:47:42.765867 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-11 00:47:42.765877 | orchestrator | 2026-04-11 00:47:42.765885 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-11 00:47:42.765892 | orchestrator | Saturday 11 April 2026 00:47:27 +0000 (0:00:02.098) 0:00:17.370 ******** 2026-04-11 00:47:42.765899 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-11 00:47:42.765906 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-11 00:47:42.765912 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-11 00:47:42.765919 | orchestrator | 2026-04-11 00:47:42.765930 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-11 00:47:42.765937 | orchestrator | Saturday 11 April 2026 00:47:29 +0000 (0:00:01.569) 0:00:18.940 ******** 2026-04-11 00:47:42.765944 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-11 00:47:42.765951 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-11 00:47:42.765957 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-11 00:47:42.765963 | orchestrator | 2026-04-11 00:47:42.765970 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-04-11 00:47:42.765976 | orchestrator | Saturday 11 April 2026 00:47:30 +0000 (0:00:01.306) 0:00:20.246 ******** 2026-04-11 00:47:42.765983 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-11 00:47:42.765990 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-11 00:47:42.765996 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-11 00:47:42.766003 | orchestrator | 2026-04-11 00:47:42.766010 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-11 00:47:42.766056 | orchestrator | Saturday 11 April 2026 00:47:32 +0000 (0:00:01.605) 0:00:21.851 ******** 2026-04-11 00:47:42.766064 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-11 00:47:42.766071 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-11 00:47:42.766079 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-11 00:47:42.766086 | orchestrator | 2026-04-11 00:47:42.766093 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-11 00:47:42.766100 | orchestrator | Saturday 11 April 2026 00:47:33 +0000 (0:00:01.254) 0:00:23.106 ******** 2026-04-11 00:47:42.766108 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:47:42.766121 | orchestrator | 2026-04-11 00:47:42.766183 | orchestrator 
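The run of "Copying over ..." tasks above renders one template per node (rabbitmq-env.conf, rabbitmq.conf, erl_inetrc, advanced.config, definitions.json, enabled_plugins). kolla-ansible renders these with Jinja2 from the `/ansible/roles/rabbitmq/templates/*.j2` files listed in the log; the sketch below uses stdlib `string.Template` and hypothetical variable names only to show the shape of the per-node rendering step, not the real template contents.

```python
# Hedged sketch of per-node config templating, loosely modelled on the
# rabbitmq-env.conf step in the log. Real templates are Jinja2 and live
# in /ansible/roles/rabbitmq/templates/; field names here are invented.
from string import Template

RABBITMQ_ENV_TEMPLATE = Template(
    "NODENAME=rabbit@$hostname\n"
    "RABBITMQ_LOG_BASE=$log_dir\n"
)

def render_env_conf(hostname, log_dir="/var/log/kolla/rabbitmq"):
    """Render one node's env file; the same template is applied to
    testbed-node-0, -1 and -2 with different hostnames."""
    return RABBITMQ_ENV_TEMPLATE.substitute(hostname=hostname, log_dir=log_dir)

for node in ("testbed-node-0", "testbed-node-1", "testbed-node-2"):
    print(render_env_conf(node))
```

Each task reports `changed` per node because the rendered file differs from (or does not yet exist at) the target path under `/etc/kolla/rabbitmq/`.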
| TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-04-11 00:47:42.766199 | orchestrator | Saturday 11 April 2026 00:47:34 +0000 (0:00:00.762) 0:00:23.869 ******** 2026-04-11 00:47:42.766207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 00:47:42.766218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 00:47:42.766234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 00:47:42.766242 | orchestrator | 2026-04-11 00:47:42.766248 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-04-11 00:47:42.766255 | orchestrator | Saturday 11 April 2026 00:47:35 +0000 (0:00:01.288) 0:00:25.157 ******** 2026-04-11 00:47:42.766261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 00:47:42.766274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 00:47:42.766281 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:47:42.766288 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:47:42.766302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 00:47:42.766310 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:47:42.766316 | orchestrator | 2026-04-11 00:47:42.766322 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-04-11 00:47:42.766329 | orchestrator | Saturday 11 April 2026 00:47:35 +0000 (0:00:00.356) 0:00:25.514 ******** 2026-04-11 00:47:42.766336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 00:47:42.766346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 00:47:42.766353 | orchestrator | skipping: 
[testbed-node-0] 2026-04-11 00:47:42.766360 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:47:42.766370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 00:47:42.766377 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:47:42.766384 | orchestrator | 2026-04-11 00:47:42.766390 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-04-11 00:47:42.766397 | orchestrator | Saturday 11 April 2026 00:47:36 +0000 (0:00:00.874) 0:00:26.389 ******** 2026-04-11 00:47:42.766408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 00:47:42.766421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 00:47:42.766429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 00:47:42.766436 | orchestrator | 2026-04-11 00:47:42.766443 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-04-11 00:47:42.766450 | orchestrator | Saturday 11 April 2026 00:47:37 +0000 (0:00:01.022) 0:00:27.411 ******** 2026-04-11 00:47:42.766457 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 00:47:42.766464 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:42.766473 | orchestrator | } 2026-04-11 00:47:42.766480 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 00:47:42.766486 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:42.766493 | orchestrator | } 2026-04-11 00:47:42.766499 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 00:47:42.766506 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:47:42.766513 | orchestrator | } 2026-04-11 00:47:42.766519 | orchestrator | 2026-04-11 00:47:42.766526 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 00:47:42.766532 | 
orchestrator | Saturday 11 April 2026 00:47:37 +0000 (0:00:00.373) 0:00:27.785 ******** 2026-04-11 00:47:42.766545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 00:47:42.766558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 00:47:42.766566 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:47:42.766573 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:47:42.766580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 00:47:42.766588 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:47:42.766595 | orchestrator | 2026-04-11 00:47:42.766602 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-11 00:47:42.766679 | orchestrator | Saturday 11 April 2026 00:47:38 +0000 (0:00:00.633) 0:00:28.418 ******** 2026-04-11 00:47:42.766690 | orchestrator | changed: 
[testbed-node-0] 2026-04-11 00:47:42.766698 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:47:42.766705 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:47:42.766712 | orchestrator | 2026-04-11 00:47:42.766719 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-11 00:47:42.766730 | orchestrator | Saturday 11 April 2026 00:47:39 +0000 (0:00:00.847) 0:00:29.266 ******** 2026-04-11 00:47:42.766746 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_3c72ktg_/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_3c72ktg_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_3c72ktg_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq: Internal Server Error (\"unknown: repository kolla/release/2024.2/rabbitmq not found\")\\n'"} 2026-04-11 00:47:42.766765 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_rc0_4ryu/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_rc0_4ryu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_rc0_4ryu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq: Internal Server Error (\"unknown: repository kolla/release/2024.2/rabbitmq not found\")\\n'"} 2026-04-11 00:47:42.766785 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_tudds1ud/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_tudds1ud/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_tudds1ud/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq: Internal Server Error (\"unknown: repository kolla/release/2024.2/rabbitmq not found\")\\n'"} 2026-04-11 00:47:42.766793 | orchestrator | 2026-04-11 00:47:42.766801 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:47:42.766808 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-11 00:47:42.766816 | orchestrator | testbed-node-0 : ok=19  changed=12  unreachable=0 failed=1  skipped=9  rescued=0 ignored=0 2026-04-11 00:47:42.766823 | orchestrator | testbed-node-1 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0 2026-04-11 00:47:42.766831 | orchestrator | testbed-node-2 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0 2026-04-11 00:47:42.766837 | orchestrator | 2026-04-11 00:47:42.766843 | orchestrator | 2026-04-11 00:47:42.766850 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:47:42.766856 | orchestrator | Saturday 11 April 2026 00:47:40 +0000 (0:00:01.202) 0:00:30.469 ******** 2026-04-11 00:47:42.766863 | orchestrator | =============================================================================== 2026-04-11 00:47:42.766870 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.35s 2026-04-11 00:47:42.766876 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.10s 2026-04-11 00:47:42.766883 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.61s 2026-04-11 00:47:42.766893 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.57s 2026-04-11 00:47:42.766908 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 
1.49s 2026-04-11 00:47:42.766915 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.46s 2026-04-11 00:47:42.766922 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.38s 2026-04-11 00:47:42.766929 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.31s 2026-04-11 00:47:42.766936 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.29s 2026-04-11 00:47:42.766943 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.27s 2026-04-11 00:47:42.766950 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.26s 2026-04-11 00:47:42.766961 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 1.20s 2026-04-11 00:47:42.766969 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.02s 2026-04-11 00:47:42.766977 | orchestrator | service-cert-copy : rabbitmq | Copying over backend internal TLS key ---- 0.87s 2026-04-11 00:47:42.766984 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.85s 2026-04-11 00:47:42.766991 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.85s 2026-04-11 00:47:42.766998 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 0.82s 2026-04-11 00:47:42.767005 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.76s 2026-04-11 00:47:42.767013 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.73s 2026-04-11 00:47:42.767020 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s 2026-04-11 00:47:45.812815 | orchestrator | 2026-04-11 00:47:45 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state 
STARTED 2026-04-11 00:47:45.816033 | orchestrator | 2026-04-11 00:47:45 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:47:45.818444 | orchestrator | 2026-04-11 00:47:45 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:45.818509 | orchestrator | 2026-04-11 00:47:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:47:48.850242 | orchestrator | 2026-04-11 00:47:48 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:47:48.850628 | orchestrator | 2026-04-11 00:47:48 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:47:48.851349 | orchestrator | 2026-04-11 00:47:48 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:47:48.851386 | orchestrator | 2026-04-11 00:47:48 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:49:41 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:49:41.586547 | orchestrator | 2026-04-11 00:49:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:49:44.628733 | orchestrator | 2026-04-11 00:49:44 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:49:44.629806 | orchestrator | 2026-04-11 00:49:44 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:49:44.630761 | orchestrator | 2026-04-11 00:49:44 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:49:44.631039 | orchestrator | 2026-04-11 00:49:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:49:47.691956 | orchestrator | 2026-04-11 00:49:47 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:49:47.692197 | orchestrator | 2026-04-11 00:49:47 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:49:47.693228 | orchestrator | 2026-04-11 00:49:47 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:49:47.693308 | orchestrator | 2026-04-11 00:49:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:49:50.818944 | orchestrator | 2026-04-11 00:49:50 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:49:50.819026 | orchestrator | 2026-04-11 00:49:50 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:49:50.819790 | orchestrator | 2026-04-11 00:49:50 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED 2026-04-11 00:49:50.819865 | orchestrator | 2026-04-11 00:49:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:49:53.852654 | orchestrator | 2026-04-11 00:49:53 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:49:53.853394 | orchestrator | 2026-04-11 00:49:53 | INFO  | Task 
aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:49:53.853801 | orchestrator | 2026-04-11 00:49:53 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state STARTED
2026-04-11 00:49:53.853821 | orchestrator | 2026-04-11 00:49:53 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:49:56.895928 | orchestrator | 2026-04-11 00:49:56 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED
2026-04-11 00:49:56.897299 | orchestrator | 2026-04-11 00:49:56 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:49:56.898961 | orchestrator | 2026-04-11 00:49:56 | INFO  | Task 817ff524-dc9b-41d6-8067-e2b333f91c48 is in state STARTED
2026-04-11 00:49:56.901911 | orchestrator | 2026-04-11 00:49:56 | INFO  | Task 64f050ab-df7e-4127-9323-cab2e2ac80f1 is in state SUCCESS

PLAY [Prepare all k3s nodes] ***************************************************

TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
Saturday 11 April 2026 00:45:37 +0000 (0:00:00.264) 0:00:00.264 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_prereq : Set same timezone on every Server] **************************
Saturday 11 April 2026 00:45:37 +0000 (0:00:00.538) 0:00:00.803 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_prereq : Set SELinux to disabled state] ******************************
Saturday 11 April 2026 00:45:38 +0000 (0:00:00.542) 0:00:01.499 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
Saturday 11 April 2026 00:45:38 +0000 (0:00:00.542) 0:00:02.042 ********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
Saturday 11 April 2026 00:45:41 +0000 (0:00:02.941) 0:00:04.983 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
Saturday 11 April 2026 00:45:43 +0000 (0:00:02.094) 0:00:07.077 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
Saturday 11 April 2026 00:45:46 +0000 (0:00:02.329) 0:00:09.407 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_prereq : Load br_netfilter] ******************************************
Saturday 11 April 2026 00:45:47 +0000 (0:00:00.930) 0:00:10.337 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
Saturday 11 April 2026 00:45:47 +0000 (0:00:00.525) 0:00:10.863 ********
skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
skipping: [testbed-node-4]
skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
skipping: [testbed-node-2]

TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
Saturday 11 April 2026 00:45:48 +0000 (0:00:00.830) 0:00:11.693 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
Saturday 11 April 2026 00:45:49 +0000 (0:00:01.429) 0:00:13.123 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_download : Download k3s binary x64] **********************************
Saturday 11 April 2026 00:45:50 +0000 (0:00:00.738) 0:00:13.862 ********
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-3]

TASK [k3s_download : Download k3s binary arm64] ********************************
Saturday 11 April 2026 00:45:57 +0000 (0:00:06.500) 0:00:20.362 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_download : Download k3s binary armhf] ********************************
Saturday 11 April 2026 00:45:58 +0000 (0:00:00.900) 0:00:21.263 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
Saturday 11 April 2026 00:45:59 +0000 (0:00:01.583) 0:00:22.847 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
Saturday 11 April 2026 00:46:00 +0000 (0:00:01.212) 0:00:24.059 ********
skipping: [testbed-node-3] => (item=rancher)
skipping: [testbed-node-3] => (item=rancher/k3s)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=rancher)
skipping: [testbed-node-4] => (item=rancher/k3s)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=rancher)
skipping: [testbed-node-5] => (item=rancher/k3s)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=rancher)
skipping: [testbed-node-0] => (item=rancher/k3s)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=rancher)
skipping: [testbed-node-1] => (item=rancher/k3s)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=rancher)
skipping: [testbed-node-2] => (item=rancher/k3s)
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
Saturday 11 April 2026 00:46:02 +0000 (0:00:01.370) 0:00:25.429 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
Saturday 11 April 2026 00:46:03 +0000
(0:00:01.084) 0:00:26.514 ******** 2026-04-11 00:49:56.904768 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:49:56.904782 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:49:56.904789 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:49:56.904796 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:49:56.904803 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:49:56.904811 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:49:56.904818 | orchestrator | 2026-04-11 00:49:56.904825 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-04-11 00:49:56.904832 | orchestrator | 2026-04-11 00:49:56.904839 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-04-11 00:49:56.904847 | orchestrator | Saturday 11 April 2026 00:46:04 +0000 (0:00:01.143) 0:00:27.658 ******** 2026-04-11 00:49:56.904854 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:49:56.904861 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:49:56.904868 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:49:56.904876 | orchestrator | 2026-04-11 00:49:56.904883 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-04-11 00:49:56.904890 | orchestrator | Saturday 11 April 2026 00:46:05 +0000 (0:00:01.041) 0:00:28.699 ******** 2026-04-11 00:49:56.904898 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:49:56.904905 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:49:56.904912 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:49:56.904919 | orchestrator | 2026-04-11 00:49:56.904926 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-04-11 00:49:56.904934 | orchestrator | Saturday 11 April 2026 00:46:07 +0000 (0:00:01.863) 0:00:30.563 ******** 2026-04-11 00:49:56.904941 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:49:56.904948 | orchestrator | ok: 
[testbed-node-1] 2026-04-11 00:49:56.904955 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:49:56.904962 | orchestrator | 2026-04-11 00:49:56.904969 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-04-11 00:49:56.904977 | orchestrator | Saturday 11 April 2026 00:46:08 +0000 (0:00:00.959) 0:00:31.523 ******** 2026-04-11 00:49:56.904984 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:49:56.904992 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:49:56.905000 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:49:56.905008 | orchestrator | 2026-04-11 00:49:56.905016 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-04-11 00:49:56.905025 | orchestrator | Saturday 11 April 2026 00:46:09 +0000 (0:00:00.993) 0:00:32.517 ******** 2026-04-11 00:49:56.905032 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:49:56.905040 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:49:56.905048 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:49:56.905055 | orchestrator | 2026-04-11 00:49:56.905063 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-04-11 00:49:56.905070 | orchestrator | Saturday 11 April 2026 00:46:09 +0000 (0:00:00.350) 0:00:32.867 ******** 2026-04-11 00:49:56.905078 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:49:56.905086 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:49:56.905093 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:49:56.905101 | orchestrator | 2026-04-11 00:49:56.905108 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-04-11 00:49:56.905116 | orchestrator | Saturday 11 April 2026 00:46:10 +0000 (0:00:00.966) 0:00:33.834 ******** 2026-04-11 00:49:56.905123 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:49:56.905130 | orchestrator | changed: [testbed-node-2] 2026-04-11 
00:49:56.905137 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:49:56.905145 | orchestrator | 2026-04-11 00:49:56.905153 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-04-11 00:49:56.905160 | orchestrator | Saturday 11 April 2026 00:46:11 +0000 (0:00:01.349) 0:00:35.183 ******** 2026-04-11 00:49:56.905168 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:49:56.905175 | orchestrator | 2026-04-11 00:49:56.905183 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-04-11 00:49:56.905194 | orchestrator | Saturday 11 April 2026 00:46:12 +0000 (0:00:00.796) 0:00:35.980 ******** 2026-04-11 00:49:56.905202 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:49:56.905209 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:49:56.905217 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:49:56.905224 | orchestrator | 2026-04-11 00:49:56.905232 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-04-11 00:49:56.905239 | orchestrator | Saturday 11 April 2026 00:46:16 +0000 (0:00:03.538) 0:00:39.518 ******** 2026-04-11 00:49:56.905247 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:49:56.905254 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:49:56.905262 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:49:56.905269 | orchestrator | 2026-04-11 00:49:56.905276 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-11 00:49:56.905284 | orchestrator | Saturday 11 April 2026 00:46:16 +0000 (0:00:00.618) 0:00:40.137 ******** 2026-04-11 00:49:56.905292 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:49:56.905299 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:49:56.905306 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:49:56.905314 | 
orchestrator | 2026-04-11 00:49:56.905321 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-11 00:49:56.905329 | orchestrator | Saturday 11 April 2026 00:46:18 +0000 (0:00:01.311) 0:00:41.449 ******** 2026-04-11 00:49:56.905337 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:49:56.905345 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:49:56.905352 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:49:56.905359 | orchestrator | 2026-04-11 00:49:56.905366 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-11 00:49:56.905377 | orchestrator | Saturday 11 April 2026 00:46:20 +0000 (0:00:02.335) 0:00:43.785 ******** 2026-04-11 00:49:56.905389 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:49:56.905404 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:49:56.905420 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:49:56.905432 | orchestrator | 2026-04-11 00:49:56.905443 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-11 00:49:56.905454 | orchestrator | Saturday 11 April 2026 00:46:21 +0000 (0:00:00.612) 0:00:44.397 ******** 2026-04-11 00:49:56.905479 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:49:56.905490 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:49:56.905500 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:49:56.905511 | orchestrator | 2026-04-11 00:49:56.905522 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-11 00:49:56.905534 | orchestrator | Saturday 11 April 2026 00:46:21 +0000 (0:00:00.725) 0:00:45.123 ******** 2026-04-11 00:49:56.905546 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:49:56.905558 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:49:56.905569 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:49:56.905580 | 
orchestrator | 2026-04-11 00:49:56.905588 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-11 00:49:56.905595 | orchestrator | Saturday 11 April 2026 00:46:24 +0000 (0:00:02.200) 0:00:47.324 ******** 2026-04-11 00:49:56.905601 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:49:56.905608 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:49:56.905615 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:49:56.905621 | orchestrator | 2026-04-11 00:49:56.905628 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-11 00:49:56.905635 | orchestrator | Saturday 11 April 2026 00:46:26 +0000 (0:00:02.085) 0:00:49.410 ******** 2026-04-11 00:49:56.905641 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:49:56.905648 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:49:56.905655 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:49:56.905661 | orchestrator | 2026-04-11 00:49:56.905668 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-11 00:49:56.905675 | orchestrator | Saturday 11 April 2026 00:46:27 +0000 (0:00:01.020) 0:00:50.431 ******** 2026-04-11 00:49:56.905689 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-11 00:49:56.905696 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-11 00:49:56.905726 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-11 00:49:56.905733 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-04-11 00:49:56.905740 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-11 00:49:56.905747 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-11 00:49:56.905754 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-11 00:49:56.905760 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-11 00:49:56.905767 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-11 00:49:56.905773 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-11 00:49:56.905780 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-11 00:49:56.905787 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-04-11 00:49:56.905797 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:49:56.905804 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:49:56.905810 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:49:56.905817 | orchestrator |
2026-04-11 00:49:56.905824 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-11 00:49:56.905830 | orchestrator | Saturday 11 April 2026 00:47:10 +0000 (0:00:43.333) 0:01:33.765 ********
2026-04-11 00:49:56.905837 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:49:56.905844 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:49:56.905850 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:49:56.905857 | orchestrator |
2026-04-11 00:49:56.905864 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-11 00:49:56.905870 | orchestrator | Saturday 11 April 2026 00:47:10 +0000 (0:00:00.415) 0:01:34.180 ********
2026-04-11 00:49:56.905877 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:49:56.905884 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:49:56.905890 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:49:56.905897 | orchestrator |
2026-04-11 00:49:56.905904 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-11 00:49:56.905910 | orchestrator | Saturday 11 April 2026 00:47:13 +0000 (0:00:02.094) 0:01:36.275 ********
2026-04-11 00:49:56.905917 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:49:56.905924 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:49:56.905930 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:49:56.905937 | orchestrator |
2026-04-11 00:49:56.905949 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-11 00:49:56.905956 | orchestrator | Saturday 11 April 2026 00:47:14 +0000 (0:00:01.667) 0:01:37.943 ********
2026-04-11 00:49:56.905963 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:49:56.905970 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:49:56.905976 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:49:56.905987 | orchestrator |
2026-04-11 00:49:56.905994 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-11 00:49:56.906000 | orchestrator | Saturday 11 April 2026 00:47:41 +0000 (0:00:26.949) 0:02:04.893 ********
2026-04-11 00:49:56.906007 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:49:56.906065 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:49:56.906074 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:49:56.906081 | orchestrator |
2026-04-11 00:49:56.906088 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-11 00:49:56.906094 | orchestrator | Saturday 11 April 2026 00:47:42 +0000 (0:00:00.619) 0:02:05.512 ********
2026-04-11 00:49:56.906101 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:49:56.906108 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:49:56.906114 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:49:56.906121 | orchestrator |
2026-04-11 00:49:56.906127 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-11 00:49:56.906134 | orchestrator | Saturday 11 April 2026 00:47:42 +0000 (0:00:00.722) 0:02:06.234 ********
2026-04-11 00:49:56.906141 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:49:56.906147 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:49:56.906154 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:49:56.906161 | orchestrator |
2026-04-11 00:49:56.906167 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-11 00:49:56.906174 | orchestrator | Saturday 11 April 2026 00:47:43 +0000 (0:00:00.618) 0:02:06.852 ********
2026-04-11 00:49:56.906181 | orchestrator | ok: [testbed-node-0]
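The node-token sequence above (register the file's access mode, widen it, read the token, restore the original mode) can be sketched outside Ansible as plain Python; the temporary mode value is an illustrative assumption, not taken from the role:

```python
import os
import stat

def read_with_temporary_mode(path: str, temp_mode: int = 0o644) -> str:
    """Record a file's mode, widen it, read the contents, then restore the mode.

    Mirrors the register/change/read/restore steps in the log; temp_mode is assumed.
    """
    original_mode = stat.S_IMODE(os.stat(path).st_mode)  # "Register node-token file access mode"
    os.chmod(path, temp_mode)                            # "Change file access node-token"
    try:
        with open(path) as handle:
            token = handle.read().strip()                # "Read node-token from master"
    finally:
        os.chmod(path, original_mode)                    # "Restore node-token file access"
    return token
```

Restoring the mode in a `finally` block keeps the token file locked down even if the read fails, which is the same guarantee the role's restore task provides.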
2026-04-11 00:49:56.906187 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:49:56.906194 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:49:56.906201 | orchestrator |
2026-04-11 00:49:56.906207 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-11 00:49:56.906214 | orchestrator | Saturday 11 April 2026 00:47:44 +0000 (0:00:00.686) 0:02:07.539 ********
2026-04-11 00:49:56.906221 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:49:56.906227 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:49:56.906234 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:49:56.906240 | orchestrator |
2026-04-11 00:49:56.906247 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-11 00:49:56.906254 | orchestrator | Saturday 11 April 2026 00:47:44 +0000 (0:00:00.346) 0:02:07.885 ********
2026-04-11 00:49:56.906260 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:49:56.906267 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:49:56.906274 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:49:56.906280 | orchestrator |
2026-04-11 00:49:56.906287 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-11 00:49:56.906294 | orchestrator | Saturday 11 April 2026 00:47:45 +0000 (0:00:00.574) 0:02:08.460 ********
2026-04-11 00:49:56.906301 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:49:56.906307 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:49:56.906314 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:49:56.906320 | orchestrator |
2026-04-11 00:49:56.906327 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-11 00:49:56.906334 | orchestrator | Saturday 11 April 2026 00:47:45 +0000 (0:00:00.723) 0:02:09.184 ********
2026-04-11 00:49:56.906340 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:49:56.906347 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:49:56.906354 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:49:56.906360 | orchestrator |
2026-04-11 00:49:56.906367 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-11 00:49:56.906374 | orchestrator | Saturday 11 April 2026 00:47:46 +0000 (0:00:00.763) 0:02:09.947 ********
2026-04-11 00:49:56.906380 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:49:56.906387 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:49:56.906394 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:49:56.906400 | orchestrator |
2026-04-11 00:49:56.906407 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-11 00:49:56.906418 | orchestrator | Saturday 11 April 2026 00:47:47 +0000 (0:00:00.730) 0:02:10.677 ********
2026-04-11 00:49:56.906425 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:49:56.906432 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:49:56.906438 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:49:56.906445 | orchestrator |
2026-04-11 00:49:56.906451 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-11 00:49:56.906458 | orchestrator | Saturday 11 April 2026 00:47:47 +0000 (0:00:00.276) 0:02:10.954 ********
2026-04-11 00:49:56.906483 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:49:56.906494 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:49:56.906501 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:49:56.906507 | orchestrator |
2026-04-11 00:49:56.906514 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-11 00:49:56.906521 | orchestrator | Saturday 11 April 2026 00:47:48 +0000 (0:00:00.416) 0:02:11.371 ********
2026-04-11 00:49:56.906527 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:49:56.906534 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:49:56.906541 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:49:56.906547 | orchestrator |
2026-04-11 00:49:56.906554 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-11 00:49:56.906561 | orchestrator | Saturday 11 April 2026 00:47:48 +0000 (0:00:00.666) 0:02:12.037 ********
2026-04-11 00:49:56.906567 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:49:56.906574 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:49:56.906580 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:49:56.906587 | orchestrator |
2026-04-11 00:49:56.906594 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-11 00:49:56.906601 | orchestrator | Saturday 11 April 2026 00:47:49 +0000 (0:00:00.641) 0:02:12.679 ********
2026-04-11 00:49:56.906607 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-11 00:49:56.906619 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-11 00:49:56.906626 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-11 00:49:56.906633 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-11 00:49:56.906640 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-11 00:49:56.906646 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-11 00:49:56.906653 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-11 00:49:56.906660 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-11 00:49:56.906666 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-11 00:49:56.906673 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-11 00:49:56.906680 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-11 00:49:56.906686 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-11 00:49:56.906693 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-11 00:49:56.906699 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-11 00:49:56.906706 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-11 00:49:56.906713 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-11 00:49:56.906719 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-11 00:49:56.906731 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-11 00:49:56.906738 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-11 00:49:56.906744 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-11 00:49:56.906751 | orchestrator |
2026-04-11 00:49:56.906757 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-11 00:49:56.906764 | orchestrator |
2026-04-11 00:49:56.906771 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-11 00:49:56.906777 | orchestrator | Saturday 11 April 2026 00:47:53 +0000 (0:00:03.608) 0:02:16.288 ********
2026-04-11 00:49:56.906784 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:49:56.906791 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:49:56.906797 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:49:56.906804 | orchestrator |
2026-04-11 00:49:56.906811 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-11 00:49:56.906817 | orchestrator | Saturday 11 April 2026 00:47:53 +0000 (0:00:00.297) 0:02:16.585 ********
2026-04-11 00:49:56.906824 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:49:56.906831 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:49:56.906837 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:49:56.906844 | orchestrator |
2026-04-11 00:49:56.906851 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-11 00:49:56.906857 | orchestrator | Saturday 11 April 2026 00:47:53 +0000 (0:00:00.628) 0:02:17.214 ********
2026-04-11 00:49:56.906864 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:49:56.906871 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:49:56.906877 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:49:56.906884 | orchestrator |
2026-04-11 00:49:56.906890 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-11 00:49:56.906897 | orchestrator | Saturday 11 April 2026 00:47:54 +0000 (0:00:00.308) 0:02:17.523 ********
2026-04-11 00:49:56.906904 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:49:56.906911 | orchestrator |
2026-04-11 00:49:56.906917 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-11 00:49:56.906924 | orchestrator | Saturday 11 April 2026 00:47:54 +0000 (0:00:00.559) 0:02:18.083 ********
2026-04-11 00:49:56.906937 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:49:56.906944 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:49:56.906951 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:49:56.906958 | orchestrator |
2026-04-11 00:49:56.906964 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-11 00:49:56.906971 | orchestrator | Saturday 11 April 2026 00:47:55 +0000 (0:00:00.279) 0:02:18.362 ********
2026-04-11 00:49:56.906978 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:49:56.906985 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:49:56.906991 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:49:56.906998 | orchestrator |
2026-04-11 00:49:56.907005 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-11 00:49:56.907011 | orchestrator | Saturday 11 April 2026 00:47:55 +0000 (0:00:00.266) 0:02:18.629 ********
2026-04-11 00:49:56.907018 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:49:56.907024 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:49:56.907031 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:49:56.907038 | orchestrator |
2026-04-11 00:49:56.907044 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-11 00:49:56.907051 | orchestrator | Saturday 11 April 2026 00:47:55 +0000 (0:00:00.380) 0:02:19.009 ********
2026-04-11 00:49:56.907058 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:49:56.907064 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:49:56.907071 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:49:56.907078 | orchestrator |
2026-04-11 00:49:56.907101 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-11 00:49:56.907108 | orchestrator | Saturday 11 April 2026 00:47:56 +0000 (0:00:00.621) 0:02:19.631 ********
2026-04-11 00:49:56.907115 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:49:56.907122 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:49:56.907128 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:49:56.907135 | orchestrator |
2026-04-11 00:49:56.907142 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-11 00:49:56.907149 | orchestrator | Saturday 11 April 2026 00:47:57 +0000 (0:00:01.339) 0:02:20.970 ********
2026-04-11 00:49:56.907155 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:49:56.907162 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:49:56.907168 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:49:56.907175 | orchestrator |
2026-04-11 00:49:56.907182 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-11 00:49:56.907188 | orchestrator | Saturday 11 April 2026 00:47:58 +0000 (0:00:01.106) 0:02:22.076 ********
2026-04-11 00:49:56.907195 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:49:56.907202 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:49:56.907208 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:49:56.907215 | orchestrator |
2026-04-11 00:49:56.907221 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-11 00:49:56.907228 | orchestrator |
2026-04-11 00:49:56.907235 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-11 00:49:56.907242 | orchestrator | Saturday 11 April 2026 00:48:10 +0000 (0:00:11.242) 0:02:33.318 ********
2026-04-11 00:49:56.907248 | orchestrator | ok: [testbed-manager]
2026-04-11 00:49:56.907255 | orchestrator |
2026-04-11 00:49:56.907262 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-11 00:49:56.907268 | orchestrator | Saturday 11 April 2026 00:48:10 +0000 (0:00:00.450) 0:02:34.007 ********
2026-04-11 00:49:56.907275 | orchestrator | changed: [testbed-manager]
2026-04-11 00:49:56.907282 | orchestrator |
2026-04-11 00:49:56.907288 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-11 00:49:56.907295 | orchestrator | Saturday 11 April 2026 00:48:11 +0000 (0:00:00.579) 0:02:34.458 ********
2026-04-11 00:49:56.907301 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-11 00:49:56.907308 | orchestrator |
2026-04-11 00:49:56.907315 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-11 00:49:56.907321 | orchestrator | Saturday 11 April 2026 00:48:11 +0000 (0:00:00.758) 0:02:35.038 ********
2026-04-11 00:49:56.907328 | orchestrator | changed: [testbed-manager]
2026-04-11 00:49:56.907335 | orchestrator |
2026-04-11 00:49:56.907347 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-11 00:49:56.907362 | orchestrator | Saturday 11 April 2026 00:48:12 +0000 (0:00:00.547) 0:02:35.797 ********
2026-04-11 00:49:56.907375 | orchestrator | changed: [testbed-manager]
2026-04-11 00:49:56.907386 | orchestrator |
2026-04-11 00:49:56.907396 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-11 00:49:56.907406 | orchestrator | Saturday 11 April 2026 00:48:13 +0000 (0:00:01.843) 0:02:36.345 ********
2026-04-11 00:49:56.907417 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-11 00:49:56.907427 | orchestrator |
2026-04-11 00:49:56.907437 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-11 00:49:56.907449 | orchestrator | Saturday 11 April 2026 00:48:14 +0000 (0:00:00.855) 0:02:38.188 ********
2026-04-11 00:49:56.907459 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-11 00:49:56.907495 | orchestrator |
2026-04-11 00:49:56.907507 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
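The two "Change server address in the kubeconfig" tasks above rewrite the API endpoint in the fetched kubeconfig so clients talk to the cluster VIP (`https://192.168.16.8:6443` per the earlier kubectl task) instead of the address the first master wrote locally. A minimal sketch of that rewrite; the default `127.0.0.1` endpoint in the sample is an assumption about what k3s writes, and the function name is illustrative:

```python
import re

def point_kubeconfig_at_vip(kubeconfig_text: str, vip_url: str) -> str:
    """Replace every 'server:' endpoint in a kubeconfig with the VIP URL."""
    # (?m) makes ^/$ match per line, so indented 'server:' keys are rewritten too.
    return re.sub(r"(?m)^(\s*server:\s*).*$", r"\g<1>" + vip_url, kubeconfig_text)
```

The same function covers both log tasks, since the copy used inside the manager service just applies the rewrite to a second copy of the file.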
2026-04-11 00:49:56.907519 | orchestrator | Saturday 11 April 2026 00:48:15 +0000 (0:00:00.374) 0:02:39.044 ********
2026-04-11 00:49:56.907531 | orchestrator | changed: [testbed-manager]
2026-04-11 00:49:56.907542 | orchestrator |
2026-04-11 00:49:56.907551 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-11 00:49:56.907565 | orchestrator | Saturday 11 April 2026 00:48:16 +0000 (0:00:00.368) 0:02:39.418 ********
2026-04-11 00:49:56.907572 | orchestrator | changed: [testbed-manager]
2026-04-11 00:49:56.907578 | orchestrator |
2026-04-11 00:49:56.907585 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-11 00:49:56.907591 | orchestrator |
2026-04-11 00:49:56.907598 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-11 00:49:56.907605 | orchestrator | Saturday 11 April 2026 00:48:16 +0000 (0:00:00.123) 0:02:39.787 ********
2026-04-11 00:49:56.907611 | orchestrator | ok: [testbed-manager]
2026-04-11 00:49:56.907618 | orchestrator |
2026-04-11 00:49:56.907625 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-11 00:49:56.907637 | orchestrator | Saturday 11 April 2026 00:48:16 +0000 (0:00:00.200) 0:02:39.910 ********
2026-04-11 00:49:56.907644 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-11 00:49:56.907650 | orchestrator |
2026-04-11 00:49:56.907657 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-11 00:49:56.907664 | orchestrator | Saturday 11 April 2026 00:48:16 +0000 (0:00:00.680) 0:02:40.111 ********
2026-04-11 00:49:56.907670 | orchestrator | ok: [testbed-manager]
2026-04-11 00:49:56.907677 | orchestrator |
2026-04-11 00:49:56.907683 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-11 00:49:56.907690 | orchestrator | Saturday 11 April 2026 00:48:17 +0000 (0:00:01.321) 0:02:40.792 ********
2026-04-11 00:49:56.907697 | orchestrator | ok: [testbed-manager]
2026-04-11 00:49:56.907703 | orchestrator |
2026-04-11 00:49:56.907710 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-11 00:49:56.907717 | orchestrator | Saturday 11 April 2026 00:48:18 +0000 (0:00:00.866) 0:02:42.114 ********
2026-04-11 00:49:56.907723 | orchestrator | changed: [testbed-manager]
2026-04-11 00:49:56.907730 | orchestrator |
2026-04-11 00:49:56.907737 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-11 00:49:56.907743 | orchestrator | Saturday 11 April 2026 00:48:19 +0000 (0:00:00.375) 0:02:42.980 ********
2026-04-11 00:49:56.907750 | orchestrator | ok: [testbed-manager]
2026-04-11 00:49:56.907757 | orchestrator |
2026-04-11 00:49:56.907775 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-11 00:49:56.907792 | orchestrator | Saturday 11 April 2026 00:48:20 +0000 (0:00:06.198) 0:02:43.356 ********
2026-04-11 00:49:56.907804 | orchestrator | changed: [testbed-manager]
2026-04-11 00:49:56.907815 | orchestrator |
2026-04-11 00:49:56.907825 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-11 00:49:56.907836 | orchestrator | Saturday 11 April 2026 00:48:26 +0000 (0:00:11.403) 0:02:49.554 ********
2026-04-11 00:49:56.907846 | orchestrator | changed: [testbed-manager]
2026-04-11 00:49:56.907857 | orchestrator |
2026-04-11 00:49:56.907869 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-11 00:49:56.907880 | orchestrator | Saturday 11 April 2026 00:48:37 +0000 (0:00:00.432) 0:03:00.958 ********
2026-04-11 00:49:56.907891 | orchestrator | ok: [testbed-manager]
2026-04-11 00:49:56.907903 | orchestrator |
2026-04-11 00:49:56.907914 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-11 00:49:56.907927 | orchestrator |
2026-04-11 00:49:56.907938 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-11 00:49:56.907950 | orchestrator | Saturday 11 April 2026 00:48:38 +0000 (0:00:00.432) 0:03:01.390 ********
2026-04-11 00:49:56.907961 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:49:56.907972 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:49:56.907984 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:49:56.907996 | orchestrator |
2026-04-11 00:49:56.908008 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-11 00:49:56.908019 | orchestrator | Saturday 11 April 2026 00:48:38 +0000 (0:00:00.328) 0:03:01.719 ********
2026-04-11 00:49:56.908041 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:49:56.908053 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:49:56.908064 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:49:56.908075 | orchestrator |
2026-04-11 00:49:56.908086 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-11 00:49:56.908098 | orchestrator | Saturday 11 April 2026 00:48:38 +0000 (0:00:00.519) 0:03:02.239 ********
2026-04-11 00:49:56.908110 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:49:56.908121 | orchestrator |
2026-04-11 00:49:56.908133 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-11 00:49:56.908145 | orchestrator | Saturday 11 April 2026 00:48:39 +0000 (0:00:00.707) 0:03:02.946 ********
2026-04-11 00:49:56.908156 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-11 00:49:56.908167 | orchestrator |
2026-04-11 00:49:56.908179 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-11 00:49:56.908191 | orchestrator | Saturday 11 April 2026 00:48:40 +0000 (0:00:00.871) 0:03:03.817 ********
2026-04-11 00:49:56.908201 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 00:49:56.908213 | orchestrator |
2026-04-11 00:49:56.908224 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-11 00:49:56.908235 | orchestrator | Saturday 11 April 2026 00:48:41 +0000 (0:00:00.918) 0:03:04.736 ********
2026-04-11 00:49:56.908247 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:49:56.908257 | orchestrator |
2026-04-11 00:49:56.908268 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-11 00:49:56.908280 | orchestrator | Saturday 11 April 2026 00:48:41 +0000 (0:00:00.117) 0:03:04.853 ********
2026-04-11 00:49:56.908292 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 00:49:56.908303 | orchestrator |
2026-04-11 00:49:56.908315 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-11 00:49:56.908326 | orchestrator | Saturday 11 April 2026 00:48:42 +0000 (0:00:00.949) 0:03:05.802 ********
2026-04-11 00:49:56.908337 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:49:56.908348 | orchestrator |
2026-04-11 00:49:56.908360 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-11 00:49:56.908371 | orchestrator | Saturday 11 April 2026 00:48:42 +0000 (0:00:00.110) 0:03:05.912 ********
2026-04-11 00:49:56.908382 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:49:56.908394 | orchestrator |
2026-04-11 00:49:56.908404 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-11 00:49:56.908415 | orchestrator | Saturday 11 April 2026 00:48:42 +0000 (0:00:00.249) 0:03:06.162 ********
2026-04-11 00:49:56.908427 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:49:56.908437 | orchestrator |
2026-04-11 00:49:56.908449 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-11 00:49:56.908476 | orchestrator | Saturday 11 April 2026 00:48:43 +0000 (0:00:00.104) 0:03:06.267 ********
2026-04-11 00:49:56.908490 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:49:56.908501 | orchestrator |
2026-04-11 00:49:56.908518 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-11 00:49:56.908529 | orchestrator | Saturday 11 April 2026 00:48:43 +0000 (0:00:00.097) 0:03:06.364 ********
2026-04-11 00:49:56.908540 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-11 00:49:56.908551 | orchestrator |
2026-04-11 00:49:56.908562 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-11 00:49:56.908574 | orchestrator | Saturday 11 April 2026 00:48:46 +0000 (0:00:03.799) 0:03:10.164 ********
2026-04-11 00:49:56.908585 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-11 00:49:56.908596 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-04-11 00:49:56.908606 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-11 00:49:56.908618 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-11 00:49:56.908625 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-11 00:49:56.908632 | orchestrator |
2026-04-11 00:49:56.908638 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-11 00:49:56.908645 | orchestrator | Saturday 11 April 2026 00:49:29 +0000 (0:00:42.474) 0:03:52.638 ********
2026-04-11 00:49:56.908658 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 00:49:56.908664 | orchestrator |
2026-04-11 00:49:56.908671 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-11 00:49:56.908678 | orchestrator | Saturday 11 April 2026 00:49:30 +0000 (0:00:01.221) 0:03:53.860 ********
2026-04-11 00:49:56.908684 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-11 00:49:56.908691 | orchestrator |
2026-04-11 00:49:56.908698 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-11 00:49:56.908704 | orchestrator | Saturday 11 April 2026 00:49:32 +0000 (0:00:01.645) 0:03:55.505 ********
2026-04-11 00:49:56.908713 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-11 00:49:56.908727 | orchestrator |
2026-04-11 00:49:56.908743 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-11 00:49:56.908754 | orchestrator | Saturday 11 April 2026 00:49:33 +0000 (0:00:01.052) 0:03:56.558 ********
2026-04-11 00:49:56.908764 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:49:56.908776 | orchestrator |
2026-04-11 00:49:56.908788 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-11 00:49:56.908801 | orchestrator | Saturday 11 April 2026 00:49:33 +0000 (0:00:00.118) 0:03:56.676 ********
2026-04-11 00:49:56.908812 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-11 00:49:56.908822 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-11 00:49:56.908829 | orchestrator |
2026-04-11 00:49:56.908836 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-11 00:49:56.908843 | orchestrator | Saturday 11 April 2026 00:49:35 +0000 (0:00:01.921) 0:03:58.598 ********
2026-04-11 00:49:56.908849 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:49:56.908856 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:49:56.908863 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:49:56.908869 | orchestrator |
2026-04-11 00:49:56.908876 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-11 00:49:56.908882 | orchestrator | Saturday 11 April 2026 00:49:35 +0000 (0:00:00.394) 0:03:58.992 ********
2026-04-11 00:49:56.908889 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:49:56.908896 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:49:56.908902 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:49:56.908909 | orchestrator |
2026-04-11 00:49:56.908916 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-11 00:49:56.908922 | orchestrator |
2026-04-11 00:49:56.908929 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-11 00:49:56.908936 | orchestrator | Saturday 11 April 2026 00:49:36 +0000 (0:00:01.101) 0:04:00.094 ********
2026-04-11 00:49:56.908942 | orchestrator | ok: [testbed-manager]
2026-04-11 00:49:56.908949 | orchestrator |
2026-04-11 00:49:56.908955 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-11 00:49:56.908962 | orchestrator | Saturday 11 April 2026 00:49:37 +0000 (0:00:00.152) 0:04:00.247 ********
2026-04-11 00:49:56.908969 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-11 00:49:56.908975 | orchestrator |
2026-04-11 00:49:56.908982 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-11 00:49:56.908988 | orchestrator | Saturday 11 April 2026 00:49:37 +0000 (0:00:00.247) 0:04:00.495 ********
2026-04-11 00:49:56.908995 | orchestrator | changed: [testbed-manager]
2026-04-11 00:49:56.909001 | orchestrator |
2026-04-11 00:49:56.909014 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-11 00:49:56.909020 | orchestrator |
2026-04-11 00:49:56.909027 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-11 00:49:56.909034 | orchestrator | Saturday 11 April 2026 00:49:42 +0000 (0:00:05.377) 0:04:05.872 ********
2026-04-11 00:49:56.909040 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:49:56.909047 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:49:56.909053 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:49:56.909060 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:49:56.909067 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:49:56.909073 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:49:56.909080 | orchestrator |
2026-04-11 00:49:56.909087 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-11 00:49:56.909093 | orchestrator | Saturday 11 April 2026 00:49:43 +0000 (0:00:00.581) 0:04:06.453 ********
2026-04-11 00:49:56.909100 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-11 00:49:56.909107 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-11 00:49:56.909117 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-11 00:49:56.909124 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-11 00:49:56.909131 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-11 00:49:56.909137 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-11 00:49:56.909143 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-11 00:49:56.909150 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-11 00:49:56.909157 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-11 00:49:56.909163 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-11 00:49:56.909170 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-11 00:49:56.909176 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-11 00:49:56.909188 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-11 00:49:56.909195 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-11 00:49:56.909202 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-11 00:49:56.909208 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-11 00:49:56.909215 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-11 00:49:56.909221 | orchestrator | ok: [testbed-node-1 -> localhost]
=> (item=node-role.osism.tech/rook-mds=true) 2026-04-11 00:49:56.909228 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-11 00:49:56.909234 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-11 00:49:56.909241 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-11 00:49:56.909247 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-11 00:49:56.909254 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-11 00:49:56.909261 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-11 00:49:56.909267 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-11 00:49:56.909274 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-11 00:49:56.909284 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-11 00:49:56.909291 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-11 00:49:56.909298 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-11 00:49:56.909304 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-11 00:49:56.909311 | orchestrator | 2026-04-11 00:49:56.909318 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-11 00:49:56.909324 | orchestrator | Saturday 11 April 2026 00:49:54 +0000 (0:00:11.061) 0:04:17.514 ******** 2026-04-11 00:49:56.909331 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:49:56.909337 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:49:56.909344 | orchestrator | 
skipping: [testbed-node-5] 2026-04-11 00:49:56.909351 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:49:56.909357 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:49:56.909364 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:49:56.909371 | orchestrator | 2026-04-11 00:49:56.909377 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-11 00:49:56.909384 | orchestrator | Saturday 11 April 2026 00:49:54 +0000 (0:00:00.383) 0:04:17.897 ******** 2026-04-11 00:49:56.909390 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:49:56.909397 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:49:56.909404 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:49:56.909410 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:49:56.909417 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:49:56.909424 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:49:56.909430 | orchestrator | 2026-04-11 00:49:56.909437 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:49:56.909444 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 00:49:56.909451 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-11 00:49:56.909458 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-11 00:49:56.909510 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-11 00:49:56.909526 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-11 00:49:56.909538 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-11 00:49:56.909546 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-11 00:49:56.909553 | orchestrator | 2026-04-11 00:49:56.909560 | orchestrator | 2026-04-11 00:49:56.909567 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:49:56.909573 | orchestrator | Saturday 11 April 2026 00:49:55 +0000 (0:00:00.367) 0:04:18.265 ******** 2026-04-11 00:49:56.909580 | orchestrator | =============================================================================== 2026-04-11 00:49:56.909586 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.33s 2026-04-11 00:49:56.909593 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.47s 2026-04-11 00:49:56.909599 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.95s 2026-04-11 00:49:56.909609 | orchestrator | kubectl : Install required packages ------------------------------------ 11.40s 2026-04-11 00:49:56.909620 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.24s 2026-04-11 00:49:56.909627 | orchestrator | Manage labels ---------------------------------------------------------- 11.06s 2026-04-11 00:49:56.909633 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.50s 2026-04-11 00:49:56.909639 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.20s 2026-04-11 00:49:56.909645 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.38s 2026-04-11 00:49:56.909651 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 3.80s 2026-04-11 00:49:56.909658 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.61s 2026-04-11 00:49:56.909664 | orchestrator 
| k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.54s 2026-04-11 00:49:56.909670 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.94s 2026-04-11 00:49:56.909676 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.34s 2026-04-11 00:49:56.909682 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.33s 2026-04-11 00:49:56.909688 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.20s 2026-04-11 00:49:56.909695 | orchestrator | k3s_server : Kill the temporary service used for initialization --------- 2.09s 2026-04-11 00:49:56.909701 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.09s 2026-04-11 00:49:56.909707 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.09s 2026-04-11 00:49:56.909713 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.92s 2026-04-11 00:49:56.909719 | orchestrator | 2026-04-11 00:49:56 | INFO  | Task 409f7b84-00cd-4346-aebb-e75dd6a4c7d0 is in state STARTED 2026-04-11 00:49:56.909726 | orchestrator | 2026-04-11 00:49:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:49:59.942854 | orchestrator | 2026-04-11 00:49:59 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:49:59.945324 | orchestrator | 2026-04-11 00:49:59 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:49:59.948607 | orchestrator | 2026-04-11 00:49:59 | INFO  | Task 817ff524-dc9b-41d6-8067-e2b333f91c48 is in state STARTED 2026-04-11 00:49:59.949388 | orchestrator | 2026-04-11 00:49:59 | INFO  | Task 409f7b84-00cd-4346-aebb-e75dd6a4c7d0 is in state STARTED 2026-04-11 00:49:59.949703 | orchestrator | 2026-04-11 00:49:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 
00:50:02.986235 | orchestrator | 2026-04-11 00:50:02 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:50:02.987061 | orchestrator | 2026-04-11 00:50:02 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:50:02.989841 | orchestrator | 2026-04-11 00:50:02 | INFO  | Task 817ff524-dc9b-41d6-8067-e2b333f91c48 is in state STARTED 2026-04-11 00:50:02.990414 | orchestrator | 2026-04-11 00:50:02 | INFO  | Task 409f7b84-00cd-4346-aebb-e75dd6a4c7d0 is in state SUCCESS 2026-04-11 00:50:02.990446 | orchestrator | 2026-04-11 00:50:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:50:06.034597 | orchestrator | 2026-04-11 00:50:06 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state STARTED 2026-04-11 00:50:06.037750 | orchestrator | 2026-04-11 00:50:06 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:50:06.039516 | orchestrator | 2026-04-11 00:50:06 | INFO  | Task 817ff524-dc9b-41d6-8067-e2b333f91c48 is in state SUCCESS 2026-04-11 00:50:06.039674 | orchestrator | 2026-04-11 00:50:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:52:10.673190 | orchestrator | 2026-04-11 00:52:10 | INFO  | Task cd696143-12f9-4497-9fcd-f60d36be5326 is in state SUCCESS 2026-04-11 00:52:10.675390 | orchestrator | 2026-04-11 00:52:10.675452 | orchestrator | 2026-04-11 00:52:10.675460 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-11 00:52:10.675468 | orchestrator | 2026-04-11 00:52:10.675475 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-11 00:52:10.675482 | orchestrator | Saturday 11 April 2026 00:49:57 +0000 (0:00:00.166) 0:00:00.166 ******** 2026-04-11 00:52:10.675490 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-11 00:52:10.675497 | orchestrator | 2026-04-11 00:52:10.675504 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-11 00:52:10.675510 | orchestrator | Saturday 11 April 2026 00:49:58 +0000 (0:00:01.070) 0:00:01.236 ******** 2026-04-11 00:52:10.675517 | orchestrator | changed: [testbed-manager] 2026-04-11 00:52:10.675524 | orchestrator | 2026-04-11 00:52:10.675530 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-11 00:52:10.675609 | orchestrator | Saturday 11 April 2026 00:49:59 +0000 (0:00:01.268) 0:00:02.504 ******** 2026-04-11 00:52:10.675616 | orchestrator | changed: 
[testbed-manager]
2026-04-11 00:52:10.675621 | orchestrator |
2026-04-11 00:52:10.675653 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:52:10.675661 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:52:10.675669 | orchestrator |
2026-04-11 00:52:10.675675 | orchestrator |
2026-04-11 00:52:10.675681 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:52:10.675687 | orchestrator | Saturday 11 April 2026 00:50:00 +0000 (0:00:00.407) 0:00:02.912 ********
2026-04-11 00:52:10.675693 | orchestrator | ===============================================================================
2026-04-11 00:52:10.675699 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.27s
2026-04-11 00:52:10.675706 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.07s
2026-04-11 00:52:10.675712 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.41s
2026-04-11 00:52:10.675719 | orchestrator |
2026-04-11 00:52:10.675725 | orchestrator |
2026-04-11 00:52:10.675732 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-11 00:52:10.675738 | orchestrator |
2026-04-11 00:52:10.675745 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-11 00:52:10.675752 | orchestrator | Saturday 11 April 2026 00:49:58 +0000 (0:00:00.198) 0:00:00.198 ********
2026-04-11 00:52:10.675759 | orchestrator | ok: [testbed-manager]
2026-04-11 00:52:10.675767 | orchestrator |
2026-04-11 00:52:10.675951 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-11 00:52:10.675958 | orchestrator | Saturday 11 April 2026 00:49:59 +0000 (0:00:00.799) 0:00:00.997 ********
2026-04-11 00:52:10.675962 | orchestrator | ok: [testbed-manager]
2026-04-11 00:52:10.675966 | orchestrator |
2026-04-11 00:52:10.675970 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-11 00:52:10.675975 | orchestrator | Saturday 11 April 2026 00:49:59 +0000 (0:00:00.499) 0:00:01.497 ********
2026-04-11 00:52:10.675979 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-11 00:52:10.675983 | orchestrator |
2026-04-11 00:52:10.675987 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-11 00:52:10.676007 | orchestrator | Saturday 11 April 2026 00:50:00 +0000 (0:00:01.031) 0:00:02.528 ********
2026-04-11 00:52:10.676011 | orchestrator | changed: [testbed-manager]
2026-04-11 00:52:10.676015 | orchestrator |
2026-04-11 00:52:10.676019 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-11 00:52:10.676023 | orchestrator | Saturday 11 April 2026 00:50:01 +0000 (0:00:00.997) 0:00:03.525 ********
2026-04-11 00:52:10.676027 | orchestrator | changed: [testbed-manager]
2026-04-11 00:52:10.676031 | orchestrator |
2026-04-11 00:52:10.676035 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-11 00:52:10.676039 | orchestrator | Saturday 11 April 2026 00:50:02 +0000 (0:00:00.482) 0:00:04.008 ********
2026-04-11 00:52:10.676043 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-11 00:52:10.676047 | orchestrator |
2026-04-11 00:52:10.676059 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-11 00:52:10.676063 | orchestrator | Saturday 11 April 2026 00:50:03 +0000 (0:00:01.509) 0:00:05.518 ********
2026-04-11 00:52:10.676066 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-11 00:52:10.676070 | orchestrator |
2026-04-11 00:52:10.676076 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-11 00:52:10.676082 | orchestrator | Saturday 11 April 2026 00:50:04 +0000 (0:00:00.798) 0:00:06.316 ********
2026-04-11 00:52:10.676089 | orchestrator | ok: [testbed-manager]
2026-04-11 00:52:10.676095 | orchestrator |
2026-04-11 00:52:10.676101 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-11 00:52:10.676107 | orchestrator | Saturday 11 April 2026 00:50:04 +0000 (0:00:00.371) 0:00:06.687 ********
2026-04-11 00:52:10.676113 | orchestrator | ok: [testbed-manager]
2026-04-11 00:52:10.676119 | orchestrator |
2026-04-11 00:52:10.676125 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:52:10.676131 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:52:10.676137 | orchestrator |
2026-04-11 00:52:10.676143 | orchestrator |
2026-04-11 00:52:10.676285 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:52:10.676290 | orchestrator | Saturday 11 April 2026 00:50:04 +0000 (0:00:00.270) 0:00:06.958 ********
2026-04-11 00:52:10.676294 | orchestrator | ===============================================================================
2026-04-11 00:52:10.676298 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.51s
2026-04-11 00:52:10.676302 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.03s
2026-04-11 00:52:10.676306 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.00s
2026-04-11 00:52:10.676328 | orchestrator | Get home directory of operator user ------------------------------------- 0.80s
2026-04-11 00:52:10.676332 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.80s
2026-04-11 00:52:10.676336 | orchestrator | Create .kube directory -------------------------------------------------- 0.50s
2026-04-11 00:52:10.676340 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.48s
2026-04-11 00:52:10.676344 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s
2026-04-11 00:52:10.676347 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s
2026-04-11 00:52:10.676351 | orchestrator |
2026-04-11 00:52:10.676355 | orchestrator |
2026-04-11 00:52:10.676359 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 00:52:10.676363 | orchestrator |
2026-04-11 00:52:10.676367 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 00:52:10.676371 | orchestrator | Saturday 11 April 2026 00:46:55 +0000 (0:00:00.448) 0:00:00.448 ********
2026-04-11 00:52:10.676374 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:52:10.676378 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:52:10.676382 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:52:10.676393 | orchestrator |
2026-04-11 00:52:10.676397 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 00:52:10.676400 | orchestrator | Saturday 11 April 2026 00:46:55 +0000 (0:00:00.427) 0:00:00.875 ********
2026-04-11 00:52:10.677105 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-11 00:52:10.677113 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-11 00:52:10.677181 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-11 00:52:10.677185 | orchestrator |
2026-04-11 00:52:10.677189 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-11 00:52:10.677193 | orchestrator |
2026-04-11 00:52:10.677197 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-11 00:52:10.677226 | orchestrator | Saturday 11 April 2026 00:46:56 +0000 (0:00:00.361) 0:00:01.237 ********
2026-04-11 00:52:10.677230 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:52:10.677234 | orchestrator |
2026-04-11 00:52:10.677238 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-11 00:52:10.677242 | orchestrator | Saturday 11 April 2026 00:46:57 +0000 (0:00:00.831) 0:00:02.068 ********
2026-04-11 00:52:10.677246 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:52:10.677250 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:52:10.677254 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:52:10.677258 | orchestrator |
2026-04-11 00:52:10.677262 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-11 00:52:10.677265 | orchestrator | Saturday 11 April 2026 00:46:58 +0000 (0:00:01.413) 0:00:03.482 ********
2026-04-11 00:52:10.677269 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:52:10.677273 | orchestrator |
2026-04-11 00:52:10.677277 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-11 00:52:10.677281 | orchestrator | Saturday 11 April 2026 00:46:59 +0000 (0:00:01.094) 0:00:04.577 ********
2026-04-11 00:52:10.677285 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:52:10.677289 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:52:10.677293 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:52:10.677296 | orchestrator |
2026-04-11 00:52:10.677300 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-11 00:52:10.677304 | orchestrator | Saturday 11 April 2026 00:47:00 +0000 (0:00:01.048) 0:00:05.625 ********
2026-04-11 00:52:10.677308 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-11 00:52:10.677358 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-11 00:52:10.677363 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-11 00:52:10.677373 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-11 00:52:10.677377 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-11 00:52:10.677382 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-11 00:52:10.677385 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-11 00:52:10.677389 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-11 00:52:10.677393 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-11 00:52:10.677397 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-11 00:52:10.677401 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-11 00:52:10.677404 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-11 00:52:10.677419 | orchestrator |
2026-04-11 00:52:10.677423 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-11 00:52:10.677427 | orchestrator | Saturday 11 April 2026 00:47:04 +0000 (0:00:04.134) 0:00:09.760 ********
2026-04-11 00:52:10.677431 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-11 00:52:10.677435 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-11 00:52:10.677439 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-11 00:52:10.677443 | orchestrator |
2026-04-11 00:52:10.677446 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-11 00:52:10.677468 | orchestrator | Saturday 11 April 2026 00:47:05 +0000 (0:00:00.787) 0:00:10.548 ********
2026-04-11 00:52:10.677472 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-11 00:52:10.677476 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-11 00:52:10.677480 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-11 00:52:10.677484 | orchestrator |
2026-04-11 00:52:10.677488 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-11 00:52:10.677491 | orchestrator | Saturday 11 April 2026 00:47:07 +0000 (0:00:01.853) 0:00:12.402 ********
2026-04-11 00:52:10.677495 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-11 00:52:10.677499 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.677503 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-11 00:52:10.677507 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.677511 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-11 00:52:10.677514 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.677518 | orchestrator |
2026-04-11 00:52:10.677522 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-11 00:52:10.677526 | orchestrator | Saturday 11 April 2026 00:47:08 +0000 (0:00:00.712) 0:00:13.114 ********
2026-04-11 00:52:10.677532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-11 00:52:10.677541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-11 00:52:10.677545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-11 00:52:10.677552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.677560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.677575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.677580 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 00:52:10.677585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 00:52:10.677589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 00:52:10.677593 | orchestrator | 2026-04-11 00:52:10.677597 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-11 00:52:10.677601 | orchestrator | Saturday 11 April 2026 00:47:10 +0000 (0:00:01.902) 0:00:15.017 ******** 2026-04-11 00:52:10.677605 | orchestrator | changed: 
[testbed-node-0]
2026-04-11 00:52:10.677609 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:52:10.677613 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:52:10.677622 | orchestrator |
2026-04-11 00:52:10.677626 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-11 00:52:10.677629 | orchestrator | Saturday 11 April 2026 00:47:11 +0000 (0:00:01.360) 0:00:16.377 ********
2026-04-11 00:52:10.678287 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-04-11 00:52:10.678333 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-04-11 00:52:10.678350 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-04-11 00:52:10.678354 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-04-11 00:52:10.678358 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-04-11 00:52:10.678362 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-04-11 00:52:10.678366 | orchestrator |
2026-04-11 00:52:10.678370 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-11 00:52:10.678374 | orchestrator | Saturday 11 April 2026 00:47:14 +0000 (0:00:02.908) 0:00:19.286 ********
2026-04-11 00:52:10.678378 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:52:10.678383 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:52:10.678386 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:52:10.678390 | orchestrator |
2026-04-11 00:52:10.678394 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-11 00:52:10.678398 | orchestrator | Saturday 11 April 2026 00:47:15 +0000 (0:00:01.384) 0:00:20.670 ********
2026-04-11 00:52:10.678402 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:52:10.678406 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:52:10.678410 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:52:10.678414 | orchestrator |
2026-04-11
00:52:10.678417 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-11 00:52:10.678421 | orchestrator | Saturday 11 April 2026 00:47:17 +0000 (0:00:01.917) 0:00:22.588 ******** 2026-04-11 00:52:10.678439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.678445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.678450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.678456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3f4fa71925f1f886f3975dbf4205704b87b762fd', '__omit_place_holder__3f4fa71925f1f886f3975dbf4205704b87b762fd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-11 00:52:10.678472 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.678479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.678485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.678496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.678502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.678509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.678515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3f4fa71925f1f886f3975dbf4205704b87b762fd', '__omit_place_holder__3f4fa71925f1f886f3975dbf4205704b87b762fd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-11 00:52:10.678527 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.678534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.678541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3f4fa71925f1f886f3975dbf4205704b87b762fd', '__omit_place_holder__3f4fa71925f1f886f3975dbf4205704b87b762fd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-11 00:52:10.678547 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.678553 | orchestrator | 2026-04-11 00:52:10.678560 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-11 00:52:10.678566 | orchestrator | Saturday 11 April 2026 00:47:18 +0000 (0:00:00.789) 0:00:23.378 ******** 2026-04-11 00:52:10.678573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-11 00:52:10.678593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-11 00:52:10.678600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-11 00:52:10.678635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.678643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.678653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3f4fa71925f1f886f3975dbf4205704b87b762fd', '__omit_place_holder__3f4fa71925f1f886f3975dbf4205704b87b762fd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-11 00:52:10.678660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.678675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.678679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3f4fa71925f1f886f3975dbf4205704b87b762fd', '__omit_place_holder__3f4fa71925f1f886f3975dbf4205704b87b762fd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-11 00:52:10.678688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.678694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': 
True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.678704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3f4fa71925f1f886f3975dbf4205704b87b762fd', '__omit_place_holder__3f4fa71925f1f886f3975dbf4205704b87b762fd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-11 00:52:10.678711 | orchestrator | 2026-04-11 00:52:10.678717 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-11 00:52:10.678724 | orchestrator | Saturday 11 April 2026 00:47:21 +0000 (0:00:03.345) 0:00:26.724 ******** 2026-04-11 00:52:10.678731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': 
'30'}}}) 2026-04-11 00:52:10.678743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-11 00:52:10.678750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-11 00:52:10.678761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.678768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.678778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.678784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 00:52:10.678791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 00:52:10.678802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 00:52:10.678806 | orchestrator | 2026-04-11 00:52:10.678810 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-11 00:52:10.678822 | orchestrator | Saturday 11 April 2026 00:47:25 +0000 (0:00:03.200) 0:00:29.924 ******** 2026-04-11 00:52:10.678829 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-11 00:52:10.678836 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-11 00:52:10.678843 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-11 00:52:10.678850 | orchestrator | 2026-04-11 00:52:10.678856 | 
orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-11 00:52:10.678862 | orchestrator | Saturday 11 April 2026 00:47:26 +0000 (0:00:01.767) 0:00:31.692 ******** 2026-04-11 00:52:10.678868 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-11 00:52:10.678875 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-11 00:52:10.678881 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-11 00:52:10.678887 | orchestrator | 2026-04-11 00:52:10.678893 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-11 00:52:10.678900 | orchestrator | Saturday 11 April 2026 00:47:29 +0000 (0:00:03.002) 0:00:34.694 ******** 2026-04-11 00:52:10.678906 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.678913 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.678919 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.678925 | orchestrator | 2026-04-11 00:52:10.678931 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-11 00:52:10.678937 | orchestrator | Saturday 11 April 2026 00:47:30 +0000 (0:00:00.628) 0:00:35.322 ******** 2026-04-11 00:52:10.678943 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-11 00:52:10.678951 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-11 00:52:10.678958 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-11 00:52:10.678964 | orchestrator | 2026-04-11 00:52:10.678971 | 
orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-11 00:52:10.678980 | orchestrator | Saturday 11 April 2026 00:47:32 +0000 (0:00:02.542) 0:00:37.865 ******** 2026-04-11 00:52:10.678986 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-11 00:52:10.678992 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-11 00:52:10.678998 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-11 00:52:10.679004 | orchestrator | 2026-04-11 00:52:10.679014 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-11 00:52:10.679020 | orchestrator | Saturday 11 April 2026 00:47:34 +0000 (0:00:01.942) 0:00:39.808 ******** 2026-04-11 00:52:10.679026 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.679032 | orchestrator | 2026-04-11 00:52:10.679037 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-11 00:52:10.679043 | orchestrator | Saturday 11 April 2026 00:47:35 +0000 (0:00:00.491) 0:00:40.300 ******** 2026-04-11 00:52:10.679049 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-11 00:52:10.679055 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-11 00:52:10.679061 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-11 00:52:10.679067 | orchestrator | 2026-04-11 00:52:10.679073 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-11 00:52:10.679085 | orchestrator | Saturday 11 April 2026 00:47:37 +0000 (0:00:01.969) 0:00:42.269 ******** 2026-04-11 00:52:10.679091 | orchestrator | changed: 
[testbed-node-0] => (item=haproxy-internal.pem) 2026-04-11 00:52:10.679098 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-11 00:52:10.679102 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-11 00:52:10.679106 | orchestrator | 2026-04-11 00:52:10.679110 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-04-11 00:52:10.679114 | orchestrator | Saturday 11 April 2026 00:47:38 +0000 (0:00:01.528) 0:00:43.798 ******** 2026-04-11 00:52:10.679117 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.679121 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.679125 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.679129 | orchestrator | 2026-04-11 00:52:10.679138 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-04-11 00:52:10.679142 | orchestrator | Saturday 11 April 2026 00:47:39 +0000 (0:00:00.237) 0:00:44.035 ******** 2026-04-11 00:52:10.679145 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.679149 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.679153 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.679157 | orchestrator | 2026-04-11 00:52:10.679160 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-11 00:52:10.679166 | orchestrator | Saturday 11 April 2026 00:47:39 +0000 (0:00:00.251) 0:00:44.287 ******** 2026-04-11 00:52:10.679175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-11 00:52:10.679184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-11 00:52:10.679191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-11 00:52:10.679228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.679239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.679248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.679253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 00:52:10.679257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 00:52:10.679261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 00:52:10.679265 | orchestrator | 2026-04-11 00:52:10.679269 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-11 00:52:10.679273 | orchestrator | Saturday 11 April 2026 00:47:42 +0000 (0:00:03.480) 0:00:47.767 ******** 2026-04-11 00:52:10.679279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.679295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.679303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.679309 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.679320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.679327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.679335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.679341 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.679348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.679363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.679370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.679377 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.679383 | orchestrator | 2026-04-11 00:52:10.679389 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 
2026-04-11 00:52:10.679395 | orchestrator | Saturday 11 April 2026 00:47:43 +0000 (0:00:00.571) 0:00:48.338 ******** 2026-04-11 00:52:10.679407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.679412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.679416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.679420 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.679424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.679436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.679443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.679449 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.679456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.679472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.679479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.679485 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.679492 | orchestrator | 2026-04-11 00:52:10.679498 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-11 00:52:10.679504 | orchestrator | Saturday 11 April 2026 00:47:44 +0000 (0:00:00.790) 0:00:49.129 ******** 2026-04-11 00:52:10.679509 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-11 00:52:10.679516 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-11 00:52:10.679522 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-11 00:52:10.679534 | orchestrator | 2026-04-11 00:52:10.679540 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-11 00:52:10.679546 | orchestrator | Saturday 11 April 2026 00:47:45 +0000 (0:00:01.500) 0:00:50.630 ******** 2026-04-11 00:52:10.679552 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-11 00:52:10.679558 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-11 00:52:10.679564 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-11 00:52:10.679570 | orchestrator | 2026-04-11 00:52:10.679576 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-11 00:52:10.679580 | orchestrator | Saturday 11 April 2026 00:47:47 +0000 (0:00:01.479) 0:00:52.109 ******** 2026-04-11 00:52:10.679584 | orchestrator | skipping: 
[testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-11 00:52:10.679588 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-11 00:52:10.679592 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-11 00:52:10.679596 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-11 00:52:10.679603 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.679607 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-11 00:52:10.679611 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.679615 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-11 00:52:10.679619 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.679623 | orchestrator | 2026-04-11 00:52:10.679626 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-11 00:52:10.679630 | orchestrator | Saturday 11 April 2026 00:47:47 +0000 (0:00:00.701) 0:00:52.811 ******** 2026-04-11 00:52:10.679634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-11 
00:52:10.679644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-11 00:52:10.679648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-11 00:52:10.679657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.679661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.679667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 00:52:10.679672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}}) 2026-04-11 00:52:10.679681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 00:52:10.679685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 00:52:10.679689 | orchestrator | 2026-04-11 00:52:10.679693 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-11 00:52:10.679701 | orchestrator | Saturday 11 April 2026 00:47:50 +0000 (0:00:02.530) 0:00:55.341 ******** 2026-04-11 00:52:10.679705 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 00:52:10.679710 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:52:10.679713 | orchestrator | } 2026-04-11 00:52:10.679718 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 00:52:10.679722 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:52:10.679725 | orchestrator | } 2026-04-11 00:52:10.679729 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 00:52:10.679733 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 
00:52:10.679737 | orchestrator | } 2026-04-11 00:52:10.679741 | orchestrator | 2026-04-11 00:52:10.679744 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 00:52:10.679748 | orchestrator | Saturday 11 April 2026 00:47:50 +0000 (0:00:00.296) 0:00:55.637 ******** 2026-04-11 00:52:10.679752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.679756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.679763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.679768 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.679772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.679781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.679789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.679793 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.679797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.679801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.679805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.679814 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.679818 | orchestrator | 2026-04-11 00:52:10.679822 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-11 00:52:10.679826 | orchestrator | Saturday 11 April 2026 00:47:51 +0000 (0:00:01.064) 0:00:56.701 ******** 2026-04-11 00:52:10.679830 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.679834 | orchestrator | 2026-04-11 00:52:10.679838 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-11 00:52:10.679843 | orchestrator | Saturday 11 April 2026 00:47:52 +0000 (0:00:00.616) 0:00:57.318 ******** 2026-04-11 00:52:10.679852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.679862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 00:52:10.679866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.679871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.679879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.679883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 00:52:10.679890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.679898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.679902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.679907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 00:52:10.679916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.679925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.679938 | orchestrator | 2026-04-11 00:52:10.679944 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-11 00:52:10.679950 | orchestrator | Saturday 11 April 2026 00:47:55 +0000 (0:00:03.195) 0:01:00.514 ******** 2026-04-11 00:52:10.679961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.679968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 
'timeout': '30'}}})  2026-04-11 00:52:10.679975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.679981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.679988 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.679999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.680012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 00:52:10.680024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680032 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.680036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.680041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 00:52:10.680048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680060 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.680065 | orchestrator | 2026-04-11 00:52:10.680506 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-11 00:52:10.680530 | orchestrator | Saturday 11 April 2026 00:47:56 +0000 (0:00:00.624) 0:01:01.138 ******** 2026-04-11 00:52:10.680536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}})  2026-04-11 00:52:10.680543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.680549 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.680553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.680557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.680561 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.680565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.680569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.680573 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.680577 | orchestrator | 2026-04-11 00:52:10.680581 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-11 00:52:10.680586 | orchestrator | Saturday 11 April 2026 00:47:57 +0000 (0:00:00.854) 0:01:01.993 ******** 2026-04-11 00:52:10.680589 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.680593 | 
orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.680597 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.680601 | orchestrator | 2026-04-11 00:52:10.680605 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-11 00:52:10.680609 | orchestrator | Saturday 11 April 2026 00:47:58 +0000 (0:00:01.191) 0:01:03.184 ******** 2026-04-11 00:52:10.680613 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.680616 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.680620 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.680624 | orchestrator | 2026-04-11 00:52:10.680637 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-11 00:52:10.680640 | orchestrator | Saturday 11 April 2026 00:48:00 +0000 (0:00:01.792) 0:01:04.976 ******** 2026-04-11 00:52:10.680644 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.680648 | orchestrator | 2026-04-11 00:52:10.680652 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-11 00:52:10.680656 | orchestrator | Saturday 11 April 2026 00:48:00 +0000 (0:00:00.577) 0:01:05.554 ******** 2026-04-11 00:52:10.680666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.680677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.680696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.680716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680724 | orchestrator | 2026-04-11 00:52:10.680728 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-11 00:52:10.680732 | orchestrator | Saturday 11 April 2026 00:48:03 +0000 (0:00:03.034) 0:01:08.588 ******** 2026-04-11 00:52:10.680752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.680757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680769 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.680773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.680778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680807 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.680814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.680821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.680829 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.680833 | orchestrator | 2026-04-11 00:52:10.680837 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-11 00:52:10.680841 | orchestrator | Saturday 11 April 2026 00:48:04 +0000 (0:00:00.695) 0:01:09.284 ******** 2026-04-11 00:52:10.680845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.680850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.680857 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.680862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 
00:52:10.680866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.680870 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.680874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.680878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.680882 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.680886 | orchestrator | 2026-04-11 00:52:10.680890 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-11 00:52:10.680896 | orchestrator | Saturday 11 April 2026 00:48:05 +0000 (0:00:00.732) 0:01:10.017 ******** 2026-04-11 00:52:10.680900 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.680904 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.680908 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.680912 | orchestrator | 2026-04-11 00:52:10.680915 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-11 00:52:10.680919 | orchestrator | Saturday 11 April 2026 00:48:06 +0000 (0:00:01.203) 0:01:11.220 ******** 2026-04-11 00:52:10.680923 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.680927 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.680931 | 
orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.680934 | orchestrator | 2026-04-11 00:52:10.680938 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-11 00:52:10.680942 | orchestrator | Saturday 11 April 2026 00:48:08 +0000 (0:00:01.846) 0:01:13.067 ******** 2026-04-11 00:52:10.680946 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.680950 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.680954 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.680957 | orchestrator | 2026-04-11 00:52:10.680961 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-11 00:52:10.680965 | orchestrator | Saturday 11 April 2026 00:48:08 +0000 (0:00:00.359) 0:01:13.427 ******** 2026-04-11 00:52:10.680969 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.680973 | orchestrator | 2026-04-11 00:52:10.680976 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-11 00:52:10.680980 | orchestrator | Saturday 11 April 2026 00:48:09 +0000 (0:00:00.594) 0:01:14.022 ******** 2026-04-11 00:52:10.680987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 
rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-11 00:52:10.680996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-11 00:52:10.681000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-11 00:52:10.681004 | orchestrator | 2026-04-11 00:52:10.681008 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external 
frontend] *** 2026-04-11 00:52:10.681012 | orchestrator | Saturday 11 April 2026 00:48:11 +0000 (0:00:02.460) 0:01:16.482 ******** 2026-04-11 00:52:10.681018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-11 00:52:10.681022 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.681076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-11 00:52:10.681086 | orchestrator | skipping: 
[testbed-node-1] 2026-04-11 00:52:10.681090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-11 00:52:10.681095 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.681098 | orchestrator | 2026-04-11 00:52:10.681102 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-11 00:52:10.681106 | orchestrator | Saturday 11 April 2026 00:48:13 +0000 (0:00:01.984) 0:01:18.467 ******** 2026-04-11 00:52:10.681111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-11 00:52:10.681116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 
'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-11 00:52:10.681122 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.681126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-11 00:52:10.681133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-11 00:52:10.681137 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.681141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-11 00:52:10.681145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 
check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-11 00:52:10.681152 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.681156 | orchestrator | 2026-04-11 00:52:10.681160 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-11 00:52:10.681164 | orchestrator | Saturday 11 April 2026 00:48:15 +0000 (0:00:02.299) 0:01:20.767 ******** 2026-04-11 00:52:10.681168 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.681172 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.681175 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.681179 | orchestrator | 2026-04-11 00:52:10.681382 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-11 00:52:10.681394 | orchestrator | Saturday 11 April 2026 00:48:16 +0000 (0:00:00.390) 0:01:21.158 ******** 2026-04-11 00:52:10.681398 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.681402 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.681406 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.681410 | orchestrator | 2026-04-11 00:52:10.681414 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-11 00:52:10.681418 | orchestrator | Saturday 11 April 2026 00:48:17 +0000 (0:00:01.006) 0:01:22.164 ******** 2026-04-11 00:52:10.681421 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.681425 | orchestrator | 2026-04-11 00:52:10.681429 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-11 00:52:10.681433 | orchestrator | Saturday 11 April 2026 00:48:17 +0000 (0:00:00.744) 0:01:22.908 ******** 2026-04-11 00:52:10.681438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.681443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.681517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.681561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681575 | orchestrator | 2026-04-11 00:52:10.681579 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-11 00:52:10.681583 | orchestrator | Saturday 11 April 
2026 00:48:21 +0000 (0:00:03.825) 0:01:26.734 ******** 2026-04-11 00:52:10.681591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.681601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681644 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.681649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.681656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681698 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681703 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.681708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.681712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.681732 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.681736 | orchestrator | 2026-04-11 00:52:10.681742 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-11 00:52:10.681749 | orchestrator | Saturday 11 April 2026 00:48:22 +0000 (0:00:00.720) 0:01:27.455 ******** 2026-04-11 00:52:10.681756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.681763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.681814 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.681824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.681831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.681837 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.681843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-04-11 00:52:10.681849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.681855 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.681861 | orchestrator | 2026-04-11 00:52:10.681867 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-11 00:52:10.681872 | orchestrator | Saturday 11 April 2026 00:48:23 +0000 (0:00:00.994) 0:01:28.449 ******** 2026-04-11 00:52:10.681879 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.681885 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.681892 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.681897 | orchestrator | 2026-04-11 00:52:10.681916 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-11 00:52:10.681923 | orchestrator | Saturday 11 April 2026 00:48:24 +0000 (0:00:01.082) 0:01:29.532 ******** 2026-04-11 00:52:10.681935 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.681941 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.681954 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.681960 | orchestrator | 2026-04-11 00:52:10.681968 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-11 00:52:10.681972 | orchestrator | Saturday 11 April 2026 00:48:26 +0000 (0:00:01.703) 0:01:31.236 ******** 2026-04-11 00:52:10.681976 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.681980 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.681984 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.681988 | orchestrator | 2026-04-11 00:52:10.681991 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-04-11 00:52:10.681995 | orchestrator | Saturday 11 April 2026 00:48:26 +0000 (0:00:00.254) 0:01:31.491 ******** 2026-04-11 00:52:10.681999 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.682003 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.682007 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.682010 | orchestrator | 2026-04-11 00:52:10.682043 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-11 00:52:10.682047 | orchestrator | Saturday 11 April 2026 00:48:26 +0000 (0:00:00.307) 0:01:31.798 ******** 2026-04-11 00:52:10.682052 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.682055 | orchestrator | 2026-04-11 00:52:10.682060 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-11 00:52:10.682063 | orchestrator | Saturday 11 April 2026 00:48:27 +0000 (0:00:00.864) 0:01:32.663 ******** 2026-04-11 00:52:10.682075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.682091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 00:52:10.682102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.682159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 00:52:10.682171 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.682178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 00:52:10.682194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.682299 | orchestrator |
2026-04-11 00:52:10.682305 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-04-11 00:52:10.682309 | orchestrator | Saturday 11 April 2026 00:48:32 +0000 (0:00:04.281) 0:01:36.945 ********
2026-04-11 00:52:10.682314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:10.682318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True,
'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 00:52:10.682325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682417 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.682421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.682429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 00:52:10.682433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.682479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 00:52:10.682491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682536 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.682540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.682551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.682555 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.682559 | orchestrator |
2026-04-11 00:52:10.682563 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-04-11 00:52:10.682567 | orchestrator | Saturday 11 April 2026 00:48:33 +0000 (0:00:00.971) 0:01:37.917 ********
2026-04-11 00:52:10.682572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.682580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.682585 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.682616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.682626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.682631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.682636 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.682640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.682644 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.682648 | orchestrator |
2026-04-11 00:52:10.682652 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-04-11 00:52:10.682656 | orchestrator | Saturday 11 April 2026 00:48:34 +0000 (0:00:01.254) 0:01:39.171 ********
2026-04-11 00:52:10.682660 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:52:10.682664 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:52:10.682668 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:52:10.682672 | orchestrator |
2026-04-11 00:52:10.682676 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-04-11 00:52:10.682679 | orchestrator | Saturday 11 April 2026 00:48:35 +0000 (0:00:01.225) 0:01:40.396 ********
2026-04-11 00:52:10.682683 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:52:10.682687 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:52:10.682691 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:52:10.682695 | orchestrator |
2026-04-11 00:52:10.682699 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-04-11 00:52:10.682703 | orchestrator | Saturday 11 April 2026 00:48:37 +0000 (0:00:01.828) 0:01:42.225 ********
2026-04-11 00:52:10.682706 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.682710 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.682714 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.682718 | orchestrator |
2026-04-11 00:52:10.682722 | orchestrator | TASK
[include_role : glance] ***************************************************
2026-04-11 00:52:10.682726 | orchestrator | Saturday 11 April 2026 00:48:37 +0000 (0:00:00.865) 0:01:42.494 ********
2026-04-11 00:52:10.682730 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:52:10.682735 | orchestrator |
2026-04-11 00:52:10.682738 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-04-11 00:52:10.682742 | orchestrator | Saturday 11 April 2026 00:48:38 +0000 (0:00:00.865) 0:01:43.360 ********
2026-04-11 00:52:10.682752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz',
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 00:52:10.682766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-11 00:52:10.682774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 00:52:10.682786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-11 00:52:10.682794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 00:52:10.682805 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-11 00:52:10.682810 
| orchestrator | 2026-04-11 00:52:10.682815 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-11 00:52:10.682818 | orchestrator | Saturday 11 April 2026 00:48:43 +0000 (0:00:05.160) 0:01:48.520 ******** 2026-04-11 00:52:10.682826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 00:52:10.682837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-11 00:52:10.682842 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.682847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 00:52:10.682873 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-11 00:52:10.682878 | orchestrator | skipping: [testbed-node-0] 
2026-04-11 00:52:10.682883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 00:52:10.682896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-11 00:52:10.682900 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.682904 | orchestrator | 2026-04-11 00:52:10.682908 | orchestrator | TASK [haproxy-config : Configuring firewall 
for glance] ************************ 2026-04-11 00:52:10.682912 | orchestrator | Saturday 11 April 2026 00:48:47 +0000 (0:00:04.116) 0:01:52.637 ******** 2026-04-11 00:52:10.682916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-11 00:52:10.682921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-11 00:52:10.682926 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.682930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-11 00:52:10.682938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-11 00:52:10.682942 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.682949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-11 00:52:10.682954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-11 00:52:10.682958 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.682962 | orchestrator | 2026-04-11 00:52:10.682966 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-11 00:52:10.682970 | orchestrator 
| Saturday 11 April 2026 00:48:51 +0000 (0:00:04.133) 0:01:56.771 ******** 2026-04-11 00:52:10.682974 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.682977 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.682981 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.682985 | orchestrator | 2026-04-11 00:52:10.682989 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-11 00:52:10.682996 | orchestrator | Saturday 11 April 2026 00:48:53 +0000 (0:00:01.403) 0:01:58.175 ******** 2026-04-11 00:52:10.683000 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.683003 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.683007 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.683011 | orchestrator | 2026-04-11 00:52:10.683015 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-11 00:52:10.683019 | orchestrator | Saturday 11 April 2026 00:48:55 +0000 (0:00:02.041) 0:02:00.216 ******** 2026-04-11 00:52:10.683023 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.683026 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.683030 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.683034 | orchestrator | 2026-04-11 00:52:10.683038 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-11 00:52:10.683042 | orchestrator | Saturday 11 April 2026 00:48:55 +0000 (0:00:00.308) 0:02:00.524 ******** 2026-04-11 00:52:10.683046 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.683049 | orchestrator | 2026-04-11 00:52:10.683053 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-11 00:52:10.683057 | orchestrator | Saturday 11 April 2026 00:48:56 +0000 (0:00:00.875) 0:02:01.400 ******** 2026-04-11 00:52:10.683062 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.683071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.683078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.683082 | orchestrator | 2026-04-11 00:52:10.683086 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-11 00:52:10.683090 | orchestrator | Saturday 11 April 2026 00:48:59 +0000 (0:00:03.377) 0:02:04.778 ******** 2026-04-11 00:52:10.683101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.683105 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.683109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.683117 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.683121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.683125 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.683129 | orchestrator | 2026-04-11 00:52:10.683133 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-11 00:52:10.683137 | orchestrator | Saturday 11 April 2026 00:49:00 +0000 (0:00:00.375) 0:02:05.154 ******** 2026-04-11 00:52:10.683141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.683145 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.683149 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.683153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.683160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.683164 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.683168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.683172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.683176 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.683179 | orchestrator | 2026-04-11 00:52:10.683184 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-11 00:52:10.683187 | orchestrator | Saturday 11 April 2026 00:49:00 +0000 (0:00:00.701) 0:02:05.855 ******** 2026-04-11 00:52:10.683191 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.683195 | orchestrator | changed: [testbed-node-1] 2026-04-11 
00:52:10.683199 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.683227 | orchestrator | 2026-04-11 00:52:10.683231 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-11 00:52:10.683235 | orchestrator | Saturday 11 April 2026 00:49:02 +0000 (0:00:01.355) 0:02:07.211 ******** 2026-04-11 00:52:10.683239 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.683243 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.683246 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.683250 | orchestrator | 2026-04-11 00:52:10.683255 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-11 00:52:10.683267 | orchestrator | Saturday 11 April 2026 00:49:03 +0000 (0:00:01.637) 0:02:08.848 ******** 2026-04-11 00:52:10.683272 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.683277 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.683281 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.683286 | orchestrator | 2026-04-11 00:52:10.683290 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-11 00:52:10.683295 | orchestrator | Saturday 11 April 2026 00:49:04 +0000 (0:00:00.544) 0:02:09.393 ******** 2026-04-11 00:52:10.683299 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.683303 | orchestrator | 2026-04-11 00:52:10.683307 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-11 00:52:10.683312 | orchestrator | Saturday 11 April 2026 00:49:05 +0000 (0:00:00.942) 0:02:10.336 ******** 2026-04-11 00:52:10.683321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-04-11 00:52:10.683330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 00:52:10.683344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 00:52:10.683350 | orchestrator | 2026-04-11 00:52:10.683354 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-11 00:52:10.683359 | orchestrator | Saturday 11 April 2026 00:49:09 +0000 (0:00:03.656) 0:02:13.993 ******** 2026-04-11 00:52:10.683368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 00:52:10.683376 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.683386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 00:52:10.683395 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.683404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 00:52:10.683409 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.683413 | orchestrator | 2026-04-11 00:52:10.683417 | 
orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-11 00:52:10.683422 | orchestrator | Saturday 11 April 2026 00:49:09 +0000 (0:00:00.758) 0:02:14.751 ******** 2026-04-11 00:52:10.683427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-11 00:52:10.683446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 00:52:10.683452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-11 00:52:10.683462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 00:52:10.683466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-11 00:52:10.683471 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-11 00:52:10.683478 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.683483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 00:52:10.683488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-11 00:52:10.683492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-11 00:52:10.683497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 00:52:10.683502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-11 00:52:10.683506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 00:52:10.683510 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.683515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-11 00:52:10.683520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 00:52:10.683525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-11 00:52:10.683532 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.683542 | orchestrator | 2026-04-11 00:52:10.683547 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-11 00:52:10.683552 | orchestrator | Saturday 11 April 2026 00:49:10 +0000 (0:00:00.832) 0:02:15.583 ******** 2026-04-11 00:52:10.683556 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.683561 | orchestrator | changed: [testbed-node-2] 2026-04-11 
00:52:10.683565 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.683569 | orchestrator | 2026-04-11 00:52:10.683574 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-11 00:52:10.683578 | orchestrator | Saturday 11 April 2026 00:49:11 +0000 (0:00:01.064) 0:02:16.648 ******** 2026-04-11 00:52:10.683583 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.683587 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.683591 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.683596 | orchestrator | 2026-04-11 00:52:10.683600 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-11 00:52:10.683604 | orchestrator | Saturday 11 April 2026 00:49:13 +0000 (0:00:01.870) 0:02:18.519 ******** 2026-04-11 00:52:10.683609 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.683613 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.683618 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.683622 | orchestrator | 2026-04-11 00:52:10.683626 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-11 00:52:10.683631 | orchestrator | Saturday 11 April 2026 00:49:14 +0000 (0:00:00.452) 0:02:18.971 ******** 2026-04-11 00:52:10.683635 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.683639 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.683643 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.683647 | orchestrator | 2026-04-11 00:52:10.683651 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-11 00:52:10.683654 | orchestrator | Saturday 11 April 2026 00:49:14 +0000 (0:00:00.285) 0:02:19.257 ******** 2026-04-11 00:52:10.683661 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.683665 | orchestrator | 
2026-04-11 00:52:10.683669 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-11 00:52:10.683673 | orchestrator | Saturday 11 April 2026 00:49:15 +0000 (0:00:00.926) 0:02:20.183 ******** 2026-04-11 00:52:10.683677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:52:10.683682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:52:10.683691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:52:10.683699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:52:10.683705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:52:10.683710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:52:10.683714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:52:10.683722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:52:10.683729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:52:10.683733 | orchestrator | 2026-04-11 00:52:10.683737 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-11 00:52:10.683741 | orchestrator | Saturday 11 April 2026 00:49:18 +0000 (0:00:03.449) 0:02:23.633 ******** 2026-04-11 00:52:10.683747 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:52:10.683752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:52:10.683756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:52:10.683764 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.683772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:52:10.683777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:52:10.683781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:52:10.683787 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.683793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:52:10.683800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:52:10.683811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:52:10.683818 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.683825 | orchestrator | 2026-04-11 00:52:10.683830 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-11 00:52:10.683840 | orchestrator | Saturday 11 April 2026 00:49:19 +0000 (0:00:00.617) 0:02:24.251 ******** 2026-04-11 00:52:10.683853 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-11 00:52:10.683860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-11 00:52:10.683867 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.683873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-11 00:52:10.683880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-11 00:52:10.683886 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.683891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-11 00:52:10.683902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  
2026-04-11 00:52:10.683908 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.683914 | orchestrator | 2026-04-11 00:52:10.683920 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-11 00:52:10.683927 | orchestrator | Saturday 11 April 2026 00:49:20 +0000 (0:00:00.737) 0:02:24.989 ******** 2026-04-11 00:52:10.683933 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.683939 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.683945 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.683951 | orchestrator | 2026-04-11 00:52:10.683961 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-11 00:52:10.683976 | orchestrator | Saturday 11 April 2026 00:49:21 +0000 (0:00:01.271) 0:02:26.260 ******** 2026-04-11 00:52:10.683983 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.683989 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.683994 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.684000 | orchestrator | 2026-04-11 00:52:10.684006 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-11 00:52:10.684013 | orchestrator | Saturday 11 April 2026 00:49:23 +0000 (0:00:01.668) 0:02:27.929 ******** 2026-04-11 00:52:10.684019 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.684025 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.684032 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.684038 | orchestrator | 2026-04-11 00:52:10.684044 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-11 00:52:10.684052 | orchestrator | Saturday 11 April 2026 00:49:23 +0000 (0:00:00.434) 0:02:28.364 ******** 2026-04-11 00:52:10.684059 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.684063 | 
orchestrator | 2026-04-11 00:52:10.684067 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-11 00:52:10.684070 | orchestrator | Saturday 11 April 2026 00:49:24 +0000 (0:00:00.932) 0:02:29.297 ******** 2026-04-11 00:52:10.684075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.684086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.684096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.684105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.684110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.684116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.684121 | orchestrator | 2026-04-11 00:52:10.684125 | orchestrator | TASK [haproxy-config : Add configuration 
for magnum when using single external frontend] *** 2026-04-11 00:52:10.684129 | orchestrator | Saturday 11 April 2026 00:49:27 +0000 (0:00:03.438) 0:02:32.735 ******** 2026-04-11 00:52:10.684133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.684147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.684152 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.684156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.684160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 
00:52:10.684164 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.684171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.684179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.684186 | orchestrator | skipping: [testbed-node-2] 
2026-04-11 00:52:10.684190 | orchestrator | 2026-04-11 00:52:10.684194 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-11 00:52:10.684198 | orchestrator | Saturday 11 April 2026 00:49:28 +0000 (0:00:00.923) 0:02:33.659 ******** 2026-04-11 00:52:10.684240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.684248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.684255 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.684260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.684264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.684268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.684272 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.684276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.684280 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.684284 | orchestrator | 2026-04-11 00:52:10.684288 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-11 00:52:10.684292 | orchestrator | Saturday 11 April 2026 00:49:29 +0000 (0:00:00.837) 0:02:34.497 ******** 2026-04-11 00:52:10.684296 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.684300 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.684303 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.684307 | orchestrator | 2026-04-11 00:52:10.684312 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-11 00:52:10.684315 | orchestrator | Saturday 11 April 2026 00:49:30 +0000 (0:00:01.246) 0:02:35.743 ******** 2026-04-11 00:52:10.684319 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.684323 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.684327 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.684331 | orchestrator | 2026-04-11 00:52:10.684335 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-11 00:52:10.684339 | orchestrator | Saturday 11 April 2026 00:49:32 +0000 (0:00:01.961) 0:02:37.705 ******** 2026-04-11 00:52:10.684347 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.684355 | orchestrator | 2026-04-11 00:52:10.684359 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-11 00:52:10.684363 | orchestrator | Saturday 11 April 2026 00:49:34 +0000 (0:00:01.218) 0:02:38.924 ******** 2026-04-11 00:52:10.684368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.684376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.684380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.684385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.684392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.684400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': 
{'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.684407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.684412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.684416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.684420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.684427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.684434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.684438 | orchestrator |
2026-04-11 00:52:10.684442 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-04-11 00:52:10.684446 | orchestrator | Saturday 11 April 2026 00:49:38 +0000 (0:00:04.042) 0:02:42.966 ********
2026-04-11 00:52:10.684857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:10.684878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.684883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.684888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.684902 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.684916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:10.684959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.684969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.684975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.684981 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.684987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:10.685010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.685016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.685191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.685223 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.685233 | orchestrator |
2026-04-11 00:52:10.685240 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-04-11 00:52:10.685246 | orchestrator | Saturday 11 April 2026 00:49:38 +0000 (0:00:00.679) 0:02:43.645 ********
2026-04-11 00:52:10.685252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.685259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.685265 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.685271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.685278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.685284 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.685290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.685304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.685310 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.685316 | orchestrator |
2026-04-11 00:52:10.685322 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-04-11 00:52:10.685328 | orchestrator | Saturday 11 April 2026 00:49:40 +0000 (0:00:01.355) 0:02:45.001 ********
2026-04-11 00:52:10.685334 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:52:10.685340 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:52:10.685346 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:52:10.685352 | orchestrator |
2026-04-11 00:52:10.685358 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-04-11 00:52:10.685362 | orchestrator | Saturday 11 April 2026 00:49:41 +0000 (0:00:01.111) 0:02:46.112 ********
2026-04-11 00:52:10.685366 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:52:10.685370 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:52:10.685374 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:52:10.685378 | orchestrator |
2026-04-11 00:52:10.685381 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-04-11 00:52:10.685385 | orchestrator | Saturday 11 April 2026 00:49:43 +0000 (0:00:01.976) 0:02:48.089 ********
2026-04-11 00:52:10.685394 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:52:10.685398 | orchestrator |
2026-04-11 00:52:10.685402 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-04-11 00:52:10.685405 | orchestrator | Saturday 11 April 2026 00:49:44 +0000 (0:00:01.390) 0:02:49.479 ********
2026-04-11 00:52:10.685409 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 00:52:10.685413 | orchestrator |
2026-04-11 00:52:10.685417 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-04-11 00:52:10.685421 | orchestrator | Saturday 11 April 2026 00:49:46 +0000 (0:00:01.550) 0:02:51.030 ********
2026-04-11 00:52:10.685476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-11 00:52:10.685489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-11 00:52:10.685493 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.685501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-11 00:52:10.685526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-11 00:52:10.685531 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.685535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-11 00:52:10.685544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-11 00:52:10.685548 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.685551 | orchestrator |
2026-04-11 00:52:10.685555 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-04-11 00:52:10.685559 | orchestrator | Saturday 11 April 2026 00:49:49 +0000 (0:00:03.243) 0:02:54.274 ********
2026-04-11 00:52:10.685583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-11 00:52:10.685606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-11 00:52:10.685617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-11 00:52:10.685622 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.685626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-11 00:52:10.685630 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.685665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-11 00:52:10.685675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-11 00:52:10.685679 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.685682 | orchestrator |
2026-04-11 00:52:10.685686 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-04-11 00:52:10.685690 | orchestrator | Saturday 11 April 2026 00:49:52 +0000 (0:00:02.693) 0:02:56.968 ********
2026-04-11 00:52:10.685697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-11 00:52:10.685702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-11 00:52:10.685709 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.685715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-11 00:52:10.685764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-11 00:52:10.685781 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.685787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-11 00:52:10.685794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-11 00:52:10.685800 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.685806 | orchestrator |
2026-04-11 00:52:10.685811 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-04-11 00:52:10.685816 | orchestrator | Saturday 11 April 2026 00:49:54 +0000 (0:00:02.627) 0:02:59.595 ********
2026-04-11 00:52:10.685821 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:52:10.685827 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:52:10.685834 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:52:10.685839 | orchestrator |
2026-04-11 00:52:10.685845 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-04-11 00:52:10.685851 | orchestrator | Saturday 11 April 2026 00:49:56 +0000 (0:00:01.455) 0:03:01.051 ********
2026-04-11 00:52:10.685857 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.685863 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.685868 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.685874 | orchestrator |
2026-04-11 00:52:10.685881 | orchestrator | TASK [include_role : masakari] *************************************************
2026-04-11 00:52:10.685887 | orchestrator | Saturday 11 April 2026 00:49:57 +0000 (0:00:01.278) 0:03:02.330 ********
2026-04-11 00:52:10.685895 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.685899 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.685903 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.685907 | orchestrator |
2026-04-11 00:52:10.685911 | orchestrator | TASK [include_role : memcached] ************************************************
2026-04-11 00:52:10.685919 | orchestrator | Saturday 11 April 2026 00:49:57 +0000 (0:00:00.272) 0:03:02.602 ********
2026-04-11 00:52:10.685923 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:52:10.685927 | orchestrator |
2026-04-11 00:52:10.685931 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-04-11 00:52:10.685935 | orchestrator | Saturday 11 April 2026 00:49:58 +0000 (0:00:00.985) 0:03:03.588 ********
2026-04-11 00:52:10.685939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-11 00:52:10.685986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-11 00:52:10.685992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-11 00:52:10.685996 | orchestrator |
2026-04-11 00:52:10.686000 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-04-11 00:52:10.686004 | orchestrator | Saturday 11 April 2026 00:50:00 +0000 (0:00:01.684) 0:03:05.272 ********
2026-04-11 00:52:10.686008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-11 00:52:10.686047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-11 00:52:10.686063 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.686077 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.686084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image':
'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-11 00:52:10.686090 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.686100 | orchestrator | 2026-04-11 00:52:10.686108 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-11 00:52:10.686114 | orchestrator | Saturday 11 April 2026 00:50:00 +0000 (0:00:00.341) 0:03:05.614 ******** 2026-04-11 00:52:10.686167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-11 00:52:10.686177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-11 00:52:10.686183 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.686189 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.686195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-11 00:52:10.686247 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.686255 | orchestrator |
2026-04-11 00:52:10.686260 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-04-11 00:52:10.686266 | orchestrator | Saturday 11 April 2026 00:50:01 +0000 (0:00:00.535) 0:03:06.149 ********
2026-04-11 00:52:10.686272 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.686277 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.686283 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.686290 | orchestrator |
2026-04-11 00:52:10.686296 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-04-11 00:52:10.686302 | orchestrator | Saturday 11 April 2026 00:50:01 +0000 (0:00:00.550) 0:03:06.699 ********
2026-04-11 00:52:10.686308 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.686314 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.686320 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.686326 | orchestrator |
2026-04-11 00:52:10.686331 | orchestrator | TASK [include_role : mistral] **************************************************
2026-04-11 00:52:10.686338 | orchestrator | Saturday 11 April 2026 00:50:02 +0000 (0:00:01.033) 0:03:07.733 ********
2026-04-11 00:52:10.686344 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.686350 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.686356 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.686362 | orchestrator |
2026-04-11 00:52:10.686368 | orchestrator | TASK [include_role : neutron] **************************************************
2026-04-11 00:52:10.686376 | orchestrator | Saturday 11 April 2026 00:50:03 +0000 (0:00:00.261) 0:03:07.994 ********
2026-04-11 00:52:10.686380
| orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.686384 | orchestrator | 2026-04-11 00:52:10.686388 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-11 00:52:10.686398 | orchestrator | Saturday 11 April 2026 00:50:04 +0000 (0:00:01.035) 0:03:09.030 ******** 2026-04-11 00:52:10.686408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.686414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.686471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-11 00:52:10.686479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-11 00:52:10.686488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.686498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.686533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.686540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.686545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.686549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 00:52:10.686561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-11 00:52:10.686566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 00:52:10.686599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-11 00:52:10.686606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.686610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.686621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-11 00:52:10.686628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.686632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.686667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.686673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.686677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 00:52:10.686685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-04-11 00:52:10.686693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 00:52:10.686697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 00:52:10.686728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.686734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-11 00:52:10.686739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.686747 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.686754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.686787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.686794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-11 00:52:10.686802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 00:52:10.686809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-11 00:52:10.686814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 00:52:10.686848 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.686854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.686864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.686868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 00:52:10.686876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 00:52:10.686880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.686913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-11 00:52:10.686919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.686923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2026-04-11 00:52:10.686931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 00:52:10.686939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 00:52:10.686943 | orchestrator | 2026-04-11 00:52:10.686947 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-11 00:52:10.686950 | orchestrator | Saturday 11 April 
2026 00:50:07 +0000 (0:00:03.825) 0:03:12.855 ******** 2026-04-11 00:52:10.686958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.686963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.686970 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-11 00:52:10.686987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-11 00:52:10.686992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.687000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.687004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.687011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 00:52:10.687015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.687022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 00:52:10.687026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.687034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.687041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-11 00:52:10.687045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-11 00:52:10.687052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-11 00:52:10.687056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.687063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.687071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.687075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.687079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 00:52:10.687086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.687090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 00:52:10.687094 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.687109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 
'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 00:52:10.687117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 00:52:10.687121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.687125 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-11 00:52:10.687132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.687137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.687192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.687199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.687224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-11 00:52:10.687234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 00:52:10.687268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 
'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-11 00:52:10.687279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 00:52:10.687283 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.687288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.687295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.687301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.687309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 00:52:10.687339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 00:52:10.687358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.687365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-11 00:52:10.687371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 00:52:10.687377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.687388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 00:52:10.687450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 00:52:10.687460 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.687467 | orchestrator | 2026-04-11 00:52:10.687473 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-11 00:52:10.687480 | orchestrator | Saturday 11 April 2026 00:50:09 +0000 (0:00:01.246) 0:03:14.102 ******** 2026-04-11 00:52:10.687486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.687493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.687504 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.687510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.687518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.687524 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.687530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.687537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.687542 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.687548 | orchestrator | 2026-04-11 00:52:10.687553 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-11 00:52:10.687559 | orchestrator | Saturday 11 April 2026 00:50:10 +0000 
(0:00:01.292) 0:03:15.394 ******** 2026-04-11 00:52:10.687565 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.687571 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.687577 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.687582 | orchestrator | 2026-04-11 00:52:10.687589 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-11 00:52:10.687595 | orchestrator | Saturday 11 April 2026 00:50:11 +0000 (0:00:01.458) 0:03:16.853 ******** 2026-04-11 00:52:10.687601 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.687606 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.687613 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.687619 | orchestrator | 2026-04-11 00:52:10.687625 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-11 00:52:10.687629 | orchestrator | Saturday 11 April 2026 00:50:13 +0000 (0:00:01.999) 0:03:18.853 ******** 2026-04-11 00:52:10.687639 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.687642 | orchestrator | 2026-04-11 00:52:10.687646 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-11 00:52:10.687654 | orchestrator | Saturday 11 April 2026 00:50:15 +0000 (0:00:01.096) 0:03:19.950 ******** 2026-04-11 00:52:10.687659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 00:52:10.687706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 00:52:10.687713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 00:52:10.687717 | orchestrator | 2026-04-11 00:52:10.687721 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-11 00:52:10.687725 | orchestrator | Saturday 11 April 2026 00:50:18 +0000 (0:00:03.057) 0:03:23.007 ******** 2026-04-11 00:52:10.687732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 00:52:10.687741 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.687777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 00:52:10.687783 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.687788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 00:52:10.687792 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.687796 | orchestrator | 2026-04-11 00:52:10.687800 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-11 00:52:10.687805 | orchestrator | Saturday 11 April 2026 00:50:18 +0000 (0:00:00.873) 0:03:23.881 ******** 2026-04-11 00:52:10.687811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-11 00:52:10.687819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-11 00:52:10.687835 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.687843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-11 00:52:10.687849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-11 00:52:10.687856 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.687867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-11 00:52:10.687873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-11 00:52:10.687879 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.687886 | orchestrator | 2026-04-11 00:52:10.687892 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-11 00:52:10.687897 | orchestrator | Saturday 11 April 2026 00:50:19 +0000 (0:00:00.689) 0:03:24.570 ******** 2026-04-11 00:52:10.687905 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.687909 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.687913 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.687917 | orchestrator | 2026-04-11 00:52:10.687921 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-11 00:52:10.687925 | orchestrator | Saturday 11 April 2026 00:50:20 +0000 (0:00:01.177) 0:03:25.748 ******** 2026-04-11 00:52:10.687929 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.687933 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.687937 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.687941 | orchestrator | 2026-04-11 00:52:10.687944 | orchestrator | TASK [include_role 
: nova] ***************************************************** 2026-04-11 00:52:10.687948 | orchestrator | Saturday 11 April 2026 00:50:22 +0000 (0:00:02.048) 0:03:27.796 ******** 2026-04-11 00:52:10.687952 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.687956 | orchestrator | 2026-04-11 00:52:10.687960 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-11 00:52:10.688003 | orchestrator | Saturday 11 April 2026 00:50:24 +0000 (0:00:01.283) 0:03:29.080 ******** 2026-04-11 00:52:10.688009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.688018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.688029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.688080 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.688091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.688103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.688110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.688123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.688130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.688179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.688187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.688199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.688248 | orchestrator | 2026-04-11 00:52:10.688255 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-11 00:52:10.688262 | orchestrator | Saturday 11 April 2026 00:50:29 +0000 (0:00:05.009) 0:03:34.089 ******** 2026-04-11 00:52:10.688273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.688328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.688338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.688350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.688357 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.688364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.688444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.688493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2026-04-11 00:52:10.688501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.688515 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.688522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.688533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.688540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.688565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 00:52:10.688571 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.688580 | orchestrator | 2026-04-11 00:52:10.688584 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-11 00:52:10.688588 | orchestrator | Saturday 11 April 2026 00:50:29 +0000 (0:00:00.677) 0:03:34.767 ******** 2026-04-11 00:52:10.688592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.688596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.688601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.688605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.688608 | orchestrator | skipping: [testbed-node-0] 2026-04-11 
00:52:10.688613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.688619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.688625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.688630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.688636 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.688645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.688650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.688655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.688663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.688671 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.688679 | orchestrator | 2026-04-11 00:52:10.688684 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-11 00:52:10.688696 | orchestrator | Saturday 11 April 2026 00:50:31 +0000 (0:00:01.309) 0:03:36.077 ******** 2026-04-11 00:52:10.688702 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.688709 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.688714 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.688720 | orchestrator | 2026-04-11 00:52:10.688749 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-11 00:52:10.688754 | orchestrator | Saturday 11 April 2026 00:50:32 +0000 (0:00:01.130) 0:03:37.208 ******** 2026-04-11 00:52:10.688758 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.688762 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.688766 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.688770 | orchestrator | 2026-04-11 00:52:10.688773 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-11 00:52:10.688777 | orchestrator | Saturday 11 April 2026 00:50:34 +0000 (0:00:01.948) 0:03:39.156 ******** 2026-04-11 00:52:10.688781 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.688785 | orchestrator | 2026-04-11 00:52:10.688788 | orchestrator | TASK [nova-cell : Configure 
loadbalancer for nova-novncproxy] ****************** 2026-04-11 00:52:10.688792 | orchestrator | Saturday 11 April 2026 00:50:35 +0000 (0:00:01.262) 0:03:40.419 ******** 2026-04-11 00:52:10.688796 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-11 00:52:10.688801 | orchestrator | 2026-04-11 00:52:10.688804 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-11 00:52:10.688808 | orchestrator | Saturday 11 April 2026 00:50:36 +0000 (0:00:01.105) 0:03:41.524 ******** 2026-04-11 00:52:10.688812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-11 00:52:10.688817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-11 00:52:10.688821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-11 00:52:10.688826 | orchestrator | 2026-04-11 00:52:10.688830 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-11 00:52:10.688835 | orchestrator | Saturday 11 April 2026 00:50:40 +0000 (0:00:04.085) 0:03:45.610 ******** 2026-04-11 00:52:10.688842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 00:52:10.688851 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.688855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 00:52:10.688859 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.688876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': 
{'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 00:52:10.688881 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.688885 | orchestrator | 2026-04-11 00:52:10.688889 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-11 00:52:10.688892 | orchestrator | Saturday 11 April 2026 00:50:42 +0000 (0:00:01.371) 0:03:46.982 ******** 2026-04-11 00:52:10.688897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 00:52:10.688902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 00:52:10.688906 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.688910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 00:52:10.688914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 00:52:10.688918 | orchestrator | skipping: [testbed-node-1] 
2026-04-11 00:52:10.688922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 00:52:10.688926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 00:52:10.688930 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.688934 | orchestrator | 2026-04-11 00:52:10.688938 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-11 00:52:10.688942 | orchestrator | Saturday 11 April 2026 00:50:43 +0000 (0:00:01.785) 0:03:48.767 ******** 2026-04-11 00:52:10.688946 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.688950 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.688953 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.688957 | orchestrator | 2026-04-11 00:52:10.688962 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-11 00:52:10.688969 | orchestrator | Saturday 11 April 2026 00:50:46 +0000 (0:00:02.364) 0:03:51.131 ******** 2026-04-11 00:52:10.688974 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:52:10.688978 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:52:10.688983 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:52:10.688987 | orchestrator | 2026-04-11 00:52:10.688991 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-11 00:52:10.688998 | orchestrator | Saturday 11 April 2026 00:50:49 +0000 (0:00:03.157) 0:03:54.289 ******** 2026-04-11 00:52:10.689009 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-11 00:52:10.689016 | orchestrator | 2026-04-11 00:52:10.689022 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-11 00:52:10.689028 | orchestrator | Saturday 11 April 2026 00:50:50 +0000 (0:00:00.773) 0:03:55.062 ******** 2026-04-11 00:52:10.689035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 00:52:10.689042 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.689048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 00:52:10.689060 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.689067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-11 00:52:10.689074 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.689080 | orchestrator |
2026-04-11 00:52:10.689087 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-04-11 00:52:10.689094 | orchestrator | Saturday 11 April 2026 00:50:51 +0000 (0:00:01.155) 0:03:56.217 ********
2026-04-11 00:52:10.689101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-11 00:52:10.689108 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.689113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-11 00:52:10.689122 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.689127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-11 00:52:10.689131 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.689136 | orchestrator |
2026-04-11 00:52:10.689140 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-04-11 00:52:10.689144 | orchestrator | Saturday 11 April 2026 00:50:52 +0000 (0:00:01.106) 0:03:57.324 ********
2026-04-11 00:52:10.689148 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.689153 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.689157 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.689161 | orchestrator |
2026-04-11 00:52:10.689169 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-04-11 00:52:10.689173 | orchestrator | Saturday 11 April 2026 00:50:53 +0000 (0:00:01.193) 0:03:58.517 ********
2026-04-11 00:52:10.689177 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:52:10.689181 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:52:10.689185 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:52:10.689189 | orchestrator |
2026-04-11 00:52:10.689193 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-04-11 00:52:10.689196 | orchestrator | Saturday 11 April 2026 00:50:55 +0000 (0:00:02.165) 0:04:00.683 ********
2026-04-11 00:52:10.689217 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:52:10.689221 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:52:10.689225 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:52:10.689229 | orchestrator |
2026-04-11 00:52:10.689233 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-04-11 00:52:10.689236 | orchestrator | Saturday 11 April 2026 00:50:58 +0000 (0:00:02.595) 0:04:03.279 ********
2026-04-11 00:52:10.689241 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-04-11 00:52:10.689244 | orchestrator |
2026-04-11 00:52:10.689248 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-04-11 00:52:10.689252 | orchestrator | Saturday 11 April 2026 00:50:59 +0000 (0:00:01.077) 0:04:04.356 ********
2026-04-11 00:52:10.689281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-04-11 00:52:10.689286 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.689290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-04-11 00:52:10.689298 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.689302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-04-11 00:52:10.689306 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.689310 | orchestrator |
2026-04-11 00:52:10.689314 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-04-11 00:52:10.689318 | orchestrator | Saturday 11 April 2026 00:51:00 +0000 (0:00:01.069) 0:04:05.426 ********
2026-04-11 00:52:10.689321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-04-11 00:52:10.689326 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.689329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-04-11 00:52:10.689333 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.689342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-04-11 00:52:10.689346 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.689350 | orchestrator |
2026-04-11 00:52:10.689354 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-04-11 00:52:10.689358 | orchestrator | Saturday 11 April 2026 00:51:01 +0000 (0:00:01.078) 0:04:06.505 ********
2026-04-11 00:52:10.689361 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.689365 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.689369 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.689373 | orchestrator |
2026-04-11 00:52:10.689377 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-04-11 00:52:10.689380 | orchestrator | Saturday 11 April 2026 00:51:03 +0000 (0:00:01.594) 0:04:08.099 ********
2026-04-11 00:52:10.689384 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:52:10.689388 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:52:10.689392 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:52:10.689396 | orchestrator |
2026-04-11 00:52:10.689400 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-04-11 00:52:10.689404 | orchestrator | Saturday 11 April 2026 00:51:05 +0000 (0:00:02.215) 0:04:10.314 ********
2026-04-11 00:52:10.689414 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:52:10.689420 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:52:10.689455 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:52:10.689464 | orchestrator |
2026-04-11 00:52:10.689471 | orchestrator | TASK [include_role : octavia] **************************************************
2026-04-11 00:52:10.689477 | orchestrator | Saturday 11 April 2026 00:51:08 +0000 (0:00:02.791) 0:04:13.106 ********
2026-04-11 00:52:10.689483 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:52:10.689489 | orchestrator |
2026-04-11 00:52:10.689496 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-04-11 00:52:10.689501 | orchestrator | Saturday 11 April 2026 00:51:09 +0000 (0:00:01.160) 0:04:14.267 ********
2026-04-11 00:52:10.689508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 00:52:10.689515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 00:52:10.689522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 00:52:10.689530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 00:52:10.689535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.689559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 00:52:10.689564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 00:52:10.689569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 00:52:10.689575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 00:52:10.689579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 00:52:10.689599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 00:52:10.689604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 00:52:10.689608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.689612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 00:52:10.689615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.689619 | orchestrator |
2026-04-11 00:52:10.689623 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-04-11 00:52:10.689627 | orchestrator | Saturday 11 April 2026 00:51:12 +0000 (0:00:03.528) 0:04:17.795 ********
2026-04-11 00:52:10.689634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 00:52:10.689656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 00:52:10.689661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 00:52:10.689665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 00:52:10.689669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.689673 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.689680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 00:52:10.689688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 00:52:10.689705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 00:52:10.689709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 00:52:10.689714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.689718 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.689722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 00:52:10.689729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 00:52:10.689737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 00:52:10.689753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 00:52:10.689758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 00:52:10.689762 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.689766 | orchestrator |
2026-04-11 00:52:10.689770 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-04-11 00:52:10.689774 | orchestrator | Saturday 11 April 2026 00:51:13 +0000 (0:00:00.667) 0:04:18.463 ********
2026-04-11 00:52:10.689778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-11 00:52:10.689782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-11 00:52:10.689786 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.689790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-11 00:52:10.689794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-11 00:52:10.689798 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.689801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-11 00:52:10.689805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-11 00:52:10.689813 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.689817 | orchestrator |
2026-04-11 00:52:10.689821 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-04-11 00:52:10.689824 | orchestrator | Saturday 11 April 2026 00:51:14 +0000 (0:00:00.836) 0:04:19.299 ********
2026-04-11 00:52:10.689832 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:52:10.689835 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:52:10.689839 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:52:10.689843 | orchestrator |
2026-04-11 00:52:10.689847 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-04-11 00:52:10.689851 | orchestrator | Saturday 11 April 2026 00:51:15 +0000 (0:00:01.187) 0:04:20.487 ********
2026-04-11 00:52:10.689855 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:52:10.689858 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:52:10.689862 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:52:10.689866 | orchestrator |
2026-04-11 00:52:10.689870 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-04-11 00:52:10.689873 | orchestrator | Saturday 11 April 2026 00:51:17 +0000 (0:00:02.014) 0:04:22.501 ********
2026-04-11 00:52:10.689877 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:52:10.689881 | orchestrator |
2026-04-11 00:52:10.689885 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-04-11 00:52:10.689889 | orchestrator | Saturday 11 April 2026 00:51:18 +0000 (0:00:01.405) 0:04:23.907 ********
2026-04-11 00:52:10.689907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:10.689912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:10.689917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:10.689928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:10.689944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy':
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 00:52:10.689950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 00:52:10.689960 | orchestrator | 2026-04-11 00:52:10.689964 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-11 00:52:10.689968 | orchestrator | Saturday 11 April 2026 00:51:23 +0000 (0:00:04.628) 
0:04:28.535 ******** 2026-04-11 00:52:10.689972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.689980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 00:52:10.689984 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.690002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.690009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 00:52:10.690054 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.690067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.690074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 00:52:10.690102 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.690110 | orchestrator | 2026-04-11 00:52:10.690115 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-11 00:52:10.690122 | orchestrator | Saturday 11 April 2026 00:51:24 +0000 (0:00:00.781) 0:04:29.316 ******** 2026-04-11 00:52:10.690128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.690134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-11 00:52:10.690140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-11 
00:52:10.690149 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.690159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.690167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-11 00:52:10.690174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-11 00:52:10.690178 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.690182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.690186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-11 00:52:10.690219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-11 00:52:10.690228 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.690234 | orchestrator | 2026-04-11 00:52:10.690241 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-11 00:52:10.690247 | orchestrator | Saturday 11 April 2026 00:51:25 +0000 (0:00:01.020) 0:04:30.337 ******** 2026-04-11 00:52:10.690254 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.690261 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.690267 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.690274 | orchestrator | 2026-04-11 00:52:10.690280 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-11 00:52:10.690284 | orchestrator | Saturday 11 April 2026 00:51:25 +0000 (0:00:00.386) 0:04:30.724 ******** 2026-04-11 00:52:10.690288 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.690292 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.690296 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.690300 | orchestrator | 2026-04-11 00:52:10.690304 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-11 00:52:10.690308 | orchestrator | Saturday 11 April 2026 00:51:26 +0000 (0:00:01.061) 0:04:31.785 ******** 2026-04-11 00:52:10.690311 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:52:10.690315 | orchestrator | 2026-04-11 00:52:10.690319 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-11 00:52:10.690323 | orchestrator | Saturday 11 April 2026 00:51:28 +0000 (0:00:01.434) 0:04:33.220 ******** 2026-04-11 00:52:10.690346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-11 00:52:10.690356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 00:52:10.690361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-11 00:52:10.690368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 00:52:10.690388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 00:52:10.690405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 00:52:10.690417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-11 00:52:10.690421 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 00:52:10.690437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 00:52:10.690453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.690461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-11 00:52:10.690479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.690489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 
00:52:10.690493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-11 00:52:10.690497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 00:52:10.690512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 00:52:10.690536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:10.690540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-11 00:52:10.690549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 
'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 00:52:10.690566 | orchestrator | 2026-04-11 00:52:10.690582 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-11 00:52:10.690586 | orchestrator | Saturday 11 April 2026 00:51:32 +0000 (0:00:03.901) 0:04:37.122 ******** 2026-04-11 00:52:10.690590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-11 00:52:10.690595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 00:52:10.690599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 00:52:10.690644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.690653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-11 00:52:10.690660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 00:52:10.690682 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.690690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-11 00:52:10.690722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 00:52:10.690729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 00:52:10.690749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-11 00:52:10.690762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.690769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 00:52:10.690775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-11 00:52:10.690781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 00:52:10.690826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 00:52:10.690832 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.690838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:10.690848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option 
httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-11 00:52:10.690860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 00:52:10.690876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 00:52:10.690882 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.690888 | 
orchestrator | 2026-04-11 00:52:10.690894 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-11 00:52:10.690900 | orchestrator | Saturday 11 April 2026 00:51:32 +0000 (0:00:00.712) 0:04:37.834 ******** 2026-04-11 00:52:10.690906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-11 00:52:10.690913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-11 00:52:10.690922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.690928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-11 00:52:10.690935 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.690941 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-11 00:52:10.690957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-11 00:52:10.690963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.690969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.690975 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.690982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-11 00:52:10.690993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-11 00:52:10.691000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.691007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-11 00:52:10.691014 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691020 | orchestrator |
2026-04-11 00:52:10.691026 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-04-11 00:52:10.691032 | orchestrator | Saturday 11 April 2026 00:51:34 +0000 (0:00:01.127) 0:04:38.962 ********
2026-04-11 00:52:10.691040 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.691044 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.691048 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691052 | orchestrator |
2026-04-11 00:52:10.691055 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-04-11 00:52:10.691059 | orchestrator | Saturday 11 April 2026 00:51:34 +0000 (0:00:00.401) 0:04:39.364 ********
2026-04-11 00:52:10.691063 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.691067 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.691070 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691074 | orchestrator |
2026-04-11 00:52:10.691078 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-04-11 00:52:10.691086 | orchestrator | Saturday 11 April 2026 00:51:35 +0000 (0:00:01.107) 0:04:40.471 ********
2026-04-11 00:52:10.691090 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:52:10.691094 | orchestrator |
2026-04-11 00:52:10.691098 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-04-11 00:52:10.691102 | orchestrator | Saturday 11 April 2026 00:51:36 +0000 (0:00:01.304) 0:04:41.775 ********
2026-04-11 00:52:10.691112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 00:52:10.691118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 00:52:10.691126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 00:52:10.691130 | orchestrator |
2026-04-11 00:52:10.691134 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-04-11 00:52:10.691138 | orchestrator | Saturday 11 April 2026 00:51:39 +0000 (0:00:02.485) 0:04:44.261 ********
2026-04-11 00:52:10.691142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 00:52:10.691153 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.691160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 00:52:10.691164 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.691168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 00:52:10.691175 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691179 | orchestrator |
2026-04-11 00:52:10.691183 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-04-11 00:52:10.691187 | orchestrator | Saturday 11 April 2026 00:51:39 +0000 (0:00:00.356) 0:04:44.617 ********
2026-04-11 00:52:10.691191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-11 00:52:10.691195 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.691199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-11 00:52:10.691243 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.691249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-11 00:52:10.691255 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691267 | orchestrator |
2026-04-11 00:52:10.691272 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-04-11 00:52:10.691279 | orchestrator | Saturday 11 April 2026 00:51:40 +0000 (0:00:00.543) 0:04:45.161 ********
2026-04-11 00:52:10.691285 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.691292 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.691299 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691305 | orchestrator |
2026-04-11 00:52:10.691312 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-04-11 00:52:10.691318 | orchestrator | Saturday 11 April 2026 00:51:40 +0000 (0:00:00.432) 0:04:45.593 ********
2026-04-11 00:52:10.691324 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.691331 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.691335 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691339 | orchestrator |
2026-04-11 00:52:10.691343 | orchestrator | TASK [include_role : skyline] **************************************************
2026-04-11 00:52:10.691346 | orchestrator | Saturday 11 April 2026 00:51:41 +0000 (0:00:01.198) 0:04:46.792 ********
2026-04-11 00:52:10.691350 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:52:10.691354 | orchestrator |
2026-04-11 00:52:10.691358 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-04-11 00:52:10.691361 | orchestrator | Saturday 11 April 2026 00:51:43 +0000 (0:00:01.528) 0:04:48.321 ********
2026-04-11 00:52:10.691367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-11 00:52:10.691373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-11 00:52:10.691382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-11 00:52:10.691392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-11 00:52:10.691549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-11 00:52:10.691571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-11 00:52:10.691575 | orchestrator |
2026-04-11 00:52:10.691579 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-04-11 00:52:10.691588 | orchestrator | Saturday 11 April 2026 00:51:49 +0000 (0:00:05.593) 0:04:53.915 ********
2026-04-11 00:52:10.691598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-11 00:52:10.691603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-11 00:52:10.691607 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.691614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-11 00:52:10.691621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-11 00:52:10.691630 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.691634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-11 00:52:10.691638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-11 00:52:10.691642 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691646 | orchestrator |
2026-04-11 00:52:10.691650 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-04-11 00:52:10.691654 | orchestrator | Saturday 11 April 2026 00:51:49 +0000 (0:00:00.836) 0:04:54.752 ********
2026-04-11 00:52:10.691658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-11 00:52:10.691666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-11 00:52:10.691671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-11 00:52:10.691676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-11 00:52:10.691683 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.691687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-11 00:52:10.691694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-11 00:52:10.691698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-11 00:52:10.691702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-11 00:52:10.691706 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.691710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-11 00:52:10.691714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-11 00:52:10.691718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-11 00:52:10.691722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-11 00:52:10.691726 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691730 | orchestrator |
2026-04-11 00:52:10.691734 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-04-11 00:52:10.691738 | orchestrator | Saturday 11 April 2026 00:51:50 +0000 (0:00:01.101) 0:04:55.853 ********
2026-04-11 00:52:10.691741 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:52:10.691745 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:52:10.691749 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:52:10.691753 | orchestrator |
2026-04-11 00:52:10.691757 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-04-11 00:52:10.691760 | orchestrator | Saturday 11 April 2026 00:51:52 +0000 (0:00:01.158) 0:04:57.012 ********
2026-04-11 00:52:10.691764 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:52:10.691768 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:52:10.691772 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:52:10.691775 | orchestrator |
2026-04-11 00:52:10.691779 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-04-11 00:52:10.691783 | orchestrator | Saturday 11 April 2026 00:51:53 +0000 (0:00:01.894) 0:04:58.906 ********
2026-04-11 00:52:10.691787 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.691791 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.691794 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691798 | orchestrator |
2026-04-11 00:52:10.691802 | orchestrator | TASK [include_role : trove] ****************************************************
2026-04-11 00:52:10.691809 | orchestrator | Saturday 11 April 2026 00:51:54 +0000 (0:00:00.263) 0:04:59.170 ********
2026-04-11 00:52:10.691816 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.691820 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.691824 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691828 | orchestrator |
2026-04-11 00:52:10.691831 | orchestrator | TASK [include_role : venus] ****************************************************
2026-04-11 00:52:10.691835 | orchestrator | Saturday 11 April 2026 00:51:54 +0000 (0:00:00.454) 0:04:59.624 ********
2026-04-11 00:52:10.691839 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.691843 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.691846 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691850 | orchestrator |
2026-04-11 00:52:10.691854 | orchestrator | TASK [include_role : watcher] **************************************************
2026-04-11 00:52:10.691858 | orchestrator | Saturday 11 April 2026 00:51:54 +0000 (0:00:00.263) 0:04:59.888 ********
2026-04-11 00:52:10.691861 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.691865 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.691869 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691873 | orchestrator |
2026-04-11 00:52:10.691876 | orchestrator | TASK [include_role : zun] ******************************************************
2026-04-11 00:52:10.691880 | orchestrator | Saturday 11 April 2026 00:51:55 +0000 (0:00:00.276) 0:05:00.165 ********
2026-04-11 00:52:10.691884 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:10.691888 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:10.691891 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:10.691895 | orchestrator |
2026-04-11 00:52:10.691899 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-04-11 00:52:10.691903 | orchestrator | Saturday 11 April 2026 00:51:55 +0000 (0:00:00.264) 0:05:00.430 ********
2026-04-11 00:52:10.691907 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:52:10.691910 | orchestrator |
2026-04-11 00:52:10.691914 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-11 00:52:10.691921 | orchestrator | Saturday 11 April 2026 00:51:57 +0000 (0:00:01.567) 0:05:01.997 ********
2026-04-11 00:52:10.691926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-11 00:52:10.691930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-11 00:52:10.691934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-11 00:52:10.691942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 00:52:10.691949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 00:52:10.691953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 00:52:10.691960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 00:52:10.691967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 00:52:10.691973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True,
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 00:52:10.691979 | orchestrator | 2026-04-11 00:52:10.691988 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-11 00:52:10.692003 | orchestrator | Saturday 11 April 2026 00:51:59 +0000 (0:00:02.340) 0:05:04.338 ******** 2026-04-11 00:52:10.692009 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 00:52:10.692016 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:52:10.692022 | orchestrator | } 2026-04-11 00:52:10.692028 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 00:52:10.692034 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:52:10.692040 | orchestrator | } 2026-04-11 00:52:10.692047 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 00:52:10.692053 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:52:10.692059 | orchestrator | } 2026-04-11 00:52:10.692066 | orchestrator | 2026-04-11 00:52:10.692072 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 00:52:10.692078 | orchestrator | Saturday 11 April 2026 00:51:59 +0000 (0:00:00.339) 0:05:04.678 ******** 2026-04-11 00:52:10.692084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.692095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.692103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.692109 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:10.692120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.692127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.692138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.692145 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:10.692152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 00:52:10.692164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 00:52:10.692171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 00:52:10.692178 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:10.692183 | orchestrator | 2026-04-11 00:52:10.692190 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-11 00:52:10.692195 | orchestrator | Saturday 11 April 2026 00:52:01 +0000 (0:00:01.633) 0:05:06.312 ******** 2026-04-11 00:52:10.692226 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:52:10.692233 | orchestrator | ok: [testbed-node-1] 2026-04-11 
00:52:10.692239 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:52:10.692245 | orchestrator | 2026-04-11 00:52:10.692251 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-11 00:52:10.692262 | orchestrator | Saturday 11 April 2026 00:52:02 +0000 (0:00:00.953) 0:05:07.265 ******** 2026-04-11 00:52:10.692268 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:52:10.692275 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:52:10.692281 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:52:10.692287 | orchestrator | 2026-04-11 00:52:10.692292 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-11 00:52:10.692298 | orchestrator | Saturday 11 April 2026 00:52:02 +0000 (0:00:00.341) 0:05:07.607 ******** 2026-04-11 00:52:10.692305 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:52:10.692310 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:52:10.692322 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:52:10.692329 | orchestrator | 2026-04-11 00:52:10.692335 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-11 00:52:10.692341 | orchestrator | Saturday 11 April 2026 00:52:03 +0000 (0:00:00.889) 0:05:08.496 ******** 2026-04-11 00:52:10.692347 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:52:10.692353 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:52:10.692360 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:52:10.692369 | orchestrator | 2026-04-11 00:52:10.692376 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-11 00:52:10.692382 | orchestrator | Saturday 11 April 2026 00:52:04 +0000 (0:00:00.916) 0:05:09.413 ******** 2026-04-11 00:52:10.692388 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:52:10.692393 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:52:10.692399 | orchestrator | ok: [testbed-node-2] 2026-04-11 
00:52:10.692404 | orchestrator | 2026-04-11 00:52:10.692410 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-11 00:52:10.692416 | orchestrator | Saturday 11 April 2026 00:52:05 +0000 (0:00:01.227) 0:05:10.640 ******** 2026-04-11 00:52:10.692427 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_k5h_9kn1/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_k5h_9kn1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_k5h_9kn1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_k5h_9kn1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, 
in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy: Internal Server Error (\"unknown: repository kolla/release/2024.2/haproxy not found\")\\n'"} 2026-04-11 00:52:10.692442 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ahz9lfxt/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ahz9lfxt/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_ahz9lfxt/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ahz9lfxt/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy: Internal Server Error (\"unknown: repository kolla/release/2024.2/haproxy not found\")\\n'"} 2026-04-11 00:52:10.692467 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_gi8y_h8k/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_gi8y_h8k/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_gi8y_h8k/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_gi8y_h8k/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy: Internal Server Error (\"unknown: repository kolla/release/2024.2/haproxy not found\")\\n'"}
2026-04-11 00:52:10.692480 | orchestrator |
2026-04-11 00:52:10.692487 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:52:10.692494 | orchestrator | testbed-node-0 : ok=120  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0
2026-04-11 00:52:10.692501 | orchestrator | testbed-node-1 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0
2026-04-11 00:52:10.692508 | orchestrator | testbed-node-2 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0
2026-04-11 00:52:10.692512 | orchestrator |
2026-04-11 00:52:10.692516 | orchestrator |
2026-04-11 00:52:10.692520 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:52:10.692524 | orchestrator | Saturday 11 April 2026 00:52:08 +0000 (0:00:02.824) 0:05:13.465 ********
2026-04-11 00:52:10.692527 | orchestrator | ===============================================================================
2026-04-11 00:52:10.692531 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.59s
2026-04-11 00:52:10.692535 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.16s
2026-04-11 00:52:10.692539 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.01s
2026-04-11 00:52:10.692543 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.63s
2026-04-11 00:52:10.692546 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.28s
2026-04-11 00:52:10.692550 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.13s
2026-04-11 00:52:10.692554 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.13s
2026-04-11 00:52:10.692558 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.12s
2026-04-11 00:52:10.692562 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.09s
2026-04-11 00:52:10.692565 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.04s
2026-04-11 00:52:10.692569 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.90s
2026-04-11 00:52:10.692573 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.83s
2026-04-11 00:52:10.692577 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.83s
2026-04-11 00:52:10.692580 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.66s
2026-04-11 00:52:10.692584 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.53s
2026-04-11 00:52:10.692588 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.48s
2026-04-11 00:52:10.692592 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.45s
2026-04-11 00:52:10.692596 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.44s
2026-04-11 00:52:10.692599 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.38s
2026-04-11 00:52:10.692606 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.35s
2026-04-11 00:52:10.692610 | orchestrator | 2026-04-11 00:52:10 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in
state STARTED 2026-04-11 00:52:10.692614 | orchestrator | 2026-04-11 00:52:10 | INFO  | Task 9d85c169-107b-4538-88fa-d5d29433ec6f is in state STARTED 2026-04-11 00:52:10.692625 | orchestrator | 2026-04-11 00:52:10 | INFO  | Task 221d135b-8d19-4084-a5e0-9f4aad2fe355 is in state STARTED 2026-04-11 00:52:10.692636 | orchestrator | 2026-04-11 00:52:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:52:13.733795 | orchestrator | 2026-04-11 00:52:13 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:52:13.734948 | orchestrator | 2026-04-11 00:52:13 | INFO  | Task 9d85c169-107b-4538-88fa-d5d29433ec6f is in state STARTED 2026-04-11 00:52:13.738883 | orchestrator | 2026-04-11 00:52:13 | INFO  | Task 221d135b-8d19-4084-a5e0-9f4aad2fe355 is in state STARTED 2026-04-11 00:52:13.738957 | orchestrator | 2026-04-11 00:52:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:52:16.780037 | orchestrator | 2026-04-11 00:52:16 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:52:16.780944 | orchestrator | 2026-04-11 00:52:16 | INFO  | Task 9d85c169-107b-4538-88fa-d5d29433ec6f is in state STARTED 2026-04-11 00:52:16.782270 | orchestrator | 2026-04-11 00:52:16 | INFO  | Task 221d135b-8d19-4084-a5e0-9f4aad2fe355 is in state STARTED 2026-04-11 00:52:16.782530 | orchestrator | 2026-04-11 00:52:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:52:19.813841 | orchestrator | 2026-04-11 00:52:19 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:52:19.813918 | orchestrator | 2026-04-11 00:52:19 | INFO  | Task 9d85c169-107b-4538-88fa-d5d29433ec6f is in state STARTED 2026-04-11 00:52:19.814957 | orchestrator | 2026-04-11 00:52:19 | INFO  | Task 221d135b-8d19-4084-a5e0-9f4aad2fe355 is in state STARTED 2026-04-11 00:52:19.816203 | orchestrator | 2026-04-11 00:52:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:52:22.840532 | orchestrator 
| 2026-04-11 00:52:22 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:52:22.843884 | orchestrator | 2026-04-11 00:52:22 | INFO  | Task 9d85c169-107b-4538-88fa-d5d29433ec6f is in state STARTED 2026-04-11 00:52:22.844553 | orchestrator | 2026-04-11 00:52:22 | INFO  | Task 221d135b-8d19-4084-a5e0-9f4aad2fe355 is in state STARTED 2026-04-11 00:52:22.844602 | orchestrator | 2026-04-11 00:52:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:52:25.879877 | orchestrator | 2026-04-11 00:52:25 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:52:25.945822 | orchestrator | 2026-04-11 00:52:25 | INFO  | Task 9d85c169-107b-4538-88fa-d5d29433ec6f is in state STARTED 2026-04-11 00:52:25.945888 | orchestrator | 2026-04-11 00:52:25 | INFO  | Task 221d135b-8d19-4084-a5e0-9f4aad2fe355 is in state STARTED 2026-04-11 00:52:25.945898 | orchestrator | 2026-04-11 00:52:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:52:28.920773 | orchestrator | 2026-04-11 00:52:28 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:52:28.921082 | orchestrator | 2026-04-11 00:52:28 | INFO  | Task 9d85c169-107b-4538-88fa-d5d29433ec6f is in state STARTED 2026-04-11 00:52:28.922186 | orchestrator | 2026-04-11 00:52:28 | INFO  | Task 221d135b-8d19-4084-a5e0-9f4aad2fe355 is in state STARTED 2026-04-11 00:52:28.922251 | orchestrator | 2026-04-11 00:52:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:52:31.957404 | orchestrator | 2026-04-11 00:52:31 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:52:31.957510 | orchestrator | 2026-04-11 00:52:31 | INFO  | Task 9d85c169-107b-4538-88fa-d5d29433ec6f is in state STARTED 2026-04-11 00:52:31.957552 | orchestrator | 2026-04-11 00:52:31 | INFO  | Task 221d135b-8d19-4084-a5e0-9f4aad2fe355 is in state STARTED 2026-04-11 00:52:31.957559 | orchestrator | 2026-04-11 00:52:31 | INFO  | 
Wait 1 second(s) until the next check
2026-04-11 00:52:34.978221 | orchestrator | 2026-04-11 00:52:34 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:52:34.982497 | orchestrator | 2026-04-11 00:52:34 | INFO  | Task 9d85c169-107b-4538-88fa-d5d29433ec6f is in state STARTED
2026-04-11 00:52:34.982635 | orchestrator | 2026-04-11 00:52:34 | INFO  | Task 221d135b-8d19-4084-a5e0-9f4aad2fe355 is in state STARTED
2026-04-11 00:52:34.982678 | orchestrator | 2026-04-11 00:52:34 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:52:38.015733 | orchestrator | 2026-04-11 00:52:38 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:52:38.017147 | orchestrator | 2026-04-11 00:52:38 | INFO  | Task 9d85c169-107b-4538-88fa-d5d29433ec6f is in state STARTED
2026-04-11 00:52:38.020497 | orchestrator | 2026-04-11 00:52:38 | INFO  | Task 221d135b-8d19-4084-a5e0-9f4aad2fe355 is in state SUCCESS
2026-04-11 00:52:38.020685 | orchestrator | 2026-04-11 00:52:38 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:52:38.022298 | orchestrator |
2026-04-11 00:52:38.022362 | orchestrator |
2026-04-11 00:52:38.022377 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 00:52:38.022389 | orchestrator |
2026-04-11 00:52:38.022402 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 00:52:38.022415 | orchestrator | Saturday 11 April 2026 00:52:12 +0000 (0:00:00.307) 0:00:00.307 ********
2026-04-11 00:52:38.022428 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:52:38.022437 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:52:38.022445 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:52:38.022637 | orchestrator |
2026-04-11 00:52:38.022648 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 00:52:38.022657 | orchestrator | Saturday 11 April 2026 00:52:12 +0000 (0:00:00.276) 0:00:00.583 ********
2026-04-11 00:52:38.022665 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-11 00:52:38.022673 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-04-11 00:52:38.022680 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-04-11 00:52:38.022689 | orchestrator |
2026-04-11 00:52:38.022696 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-11 00:52:38.022704 | orchestrator |
2026-04-11 00:52:38.022711 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-11 00:52:38.022719 | orchestrator | Saturday 11 April 2026 00:52:13 +0000 (0:00:00.296) 0:00:00.880 ********
2026-04-11 00:52:38.022726 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:52:38.022734 | orchestrator |
2026-04-11 00:52:38.022741 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-04-11 00:52:38.022749 | orchestrator | Saturday 11 April 2026 00:52:13 +0000 (0:00:00.655) 0:00:01.535 ********
2026-04-11 00:52:38.022756 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-11 00:52:38.022764 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-11 00:52:38.022771 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-11 00:52:38.022779 | orchestrator |
2026-04-11 00:52:38.022786 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-04-11 00:52:38.022793 | orchestrator | Saturday 11 April 2026 00:52:14 +0000 (0:00:01.095) 0:00:02.631 ********
2026-04-11 00:52:38.022804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.022838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.022884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.022896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.022906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.022926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.022934 | orchestrator |
2026-04-11 00:52:38.022942 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-11 00:52:38.022950 | orchestrator | Saturday 11 April 2026 00:52:16 +0000 (0:00:01.623) 0:00:04.254 ********
2026-04-11 00:52:38.022964 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:52:38.022972 | orchestrator |
2026-04-11 00:52:38.022979 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-04-11 00:52:38.022986 | orchestrator | Saturday 11 April 2026 00:52:17 +0000 (0:00:00.460) 0:00:04.714 ********
2026-04-11 00:52:38.022994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.023042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.023052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.023071 | orchestrator |
2026-04-11 00:52:38.023083 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-04-11 00:52:38.023094 | orchestrator | Saturday 11 April 2026 00:52:19 +0000 (0:00:02.784) 0:00:07.499 ********
2026-04-11 00:52:38.023106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.023172 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:38.023184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.023235 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:38.023256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.023268 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:38.023280 | orchestrator |
2026-04-11 00:52:38.023291 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-04-11 00:52:38.023321 | orchestrator | Saturday 11 April 2026 00:52:20 +0000 (0:00:00.681) 0:00:08.181 ********
2026-04-11 00:52:38.023333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.023378 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:52:38.023400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.023420 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:52:38.023432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.023460 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:52:38.023474 | orchestrator |
2026-04-11 00:52:38.023487 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-04-11 00:52:38.023500 | orchestrator | Saturday 11 April 2026 00:52:21 +0000 (0:00:01.060) 0:00:09.241 ********
2026-04-11 00:52:38.023521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.023575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.023590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-11 00:52:38.023600 | orchestrator |
2026-04-11 00:52:38.023608 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-04-11 00:52:38.023617 | orchestrator | Saturday 11 April 2026 00:52:24 +0000 (0:00:02.485) 0:00:11.726 ********
2026-04-11 00:52:38.023626 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:52:38.023635 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:52:38.023642 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:52:38.023649 | orchestrator |
2026-04-11 00:52:38.023657 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-04-11 00:52:38.023664 | orchestrator | Saturday 11 April 2026 00:52:26 +0000 (0:00:02.651) 0:00:14.378 ********
2026-04-11 00:52:38.023671 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:52:38.023678 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:52:38.023685 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:52:38.023693 | orchestrator |
2026-04-11 00:52:38.023700 | orchestrator | TASK [service-check-containers : opensearch | Check containers] ****************
2026-04-11 00:52:38.023707 | orchestrator | Saturday 11 April 2026 00:52:28 +0000 (0:00:01.386) 0:00:15.765 ********
2026-04-11 00:52:38.023715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 00:52:38.023745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes':
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 00:52:38.023753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 00:52:38.023762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 00:52:38.023779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 00:52:38.023792 | orchestrator | 2026-04-11 00:52:38.023800 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-11 00:52:38.023808 | orchestrator | Saturday 11 April 2026 00:52:30 +0000 (0:00:01.953) 0:00:17.718 ******** 2026-04-11 00:52:38.023815 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 00:52:38.023822 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:52:38.023830 | orchestrator | } 2026-04-11 00:52:38.023838 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 00:52:38.023845 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:52:38.023852 | orchestrator | } 2026-04-11 00:52:38.023859 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 00:52:38.023866 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:52:38.023874 | orchestrator | } 2026-04-11 00:52:38.023881 | orchestrator | 2026-04-11 00:52:38.023888 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 00:52:38.023895 | orchestrator | Saturday 11 April 2026 00:52:30 +0000 (0:00:00.390) 0:00:18.108 ******** 2026-04-11 00:52:38.023903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:38.023911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 00:52:38.023919 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:38.023930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:38.023950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 00:52:38.023958 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:38.023965 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 00:52:38.023974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 00:52:38.023987 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:38.023994 | orchestrator | 2026-04-11 00:52:38.024001 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-11 00:52:38.024009 | orchestrator | Saturday 11 April 2026 00:52:31 +0000 (0:00:00.730) 0:00:18.839 ******** 2026-04-11 00:52:38.024016 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:38.024023 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:52:38.024030 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:52:38.024038 | orchestrator | 2026-04-11 00:52:38.024047 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-11 00:52:38.024063 | orchestrator | Saturday 11 April 2026 00:52:31 +0000 (0:00:00.259) 0:00:19.099 ******** 2026-04-11 00:52:38.024079 | orchestrator | 2026-04-11 00:52:38.024094 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-11 00:52:38.024109 | orchestrator | Saturday 11 April 2026 00:52:31 +0000 (0:00:00.059) 0:00:19.159 ******** 2026-04-11 00:52:38.024147 | orchestrator | 2026-04-11 00:52:38.024159 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-11 00:52:38.024172 | orchestrator | Saturday 11 April 2026 00:52:31 +0000 (0:00:00.078) 0:00:19.237 ******** 2026-04-11 00:52:38.024184 | orchestrator | 2026-04-11 00:52:38.024195 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-11 00:52:38.024214 | orchestrator | Saturday 11 April 2026 00:52:31 +0000 (0:00:00.083) 0:00:19.321 ******** 2026-04-11 00:52:38.024225 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:38.024236 | orchestrator | 2026-04-11 00:52:38.024246 | orchestrator | 
RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-11 00:52:38.024257 | orchestrator | Saturday 11 April 2026 00:52:32 +0000 (0:00:00.842) 0:00:20.163 ******** 2026-04-11 00:52:38.024268 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:52:38.024279 | orchestrator | 2026-04-11 00:52:38.024289 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-11 00:52:38.024301 | orchestrator | Saturday 11 April 2026 00:52:32 +0000 (0:00:00.176) 0:00:20.339 ******** 2026-04-11 00:52:38.024314 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_o4oowga9/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_o4oowga9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_o4oowga9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_o4oowga9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch: Internal Server Error (\"unknown: repository kolla/release/2024.2/opensearch not found\")\\n'"} 2026-04-11 00:52:38.024354 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_jpvzwqba/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_jpvzwqba/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_jpvzwqba/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_jpvzwqba/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch: Internal Server Error (\"unknown: repository kolla/release/2024.2/opensearch not found\")\\n'"} 2026-04-11 00:52:38.024373 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload__n50h85p/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload__n50h85p/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload__n50h85p/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload__n50h85p/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for 
line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch: Internal Server Error (\"unknown: repository kolla/release/2024.2/opensearch not found\")\\n'"} 2026-04-11 00:52:38.024392 | orchestrator | 2026-04-11 00:52:38.024406 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:52:38.024425 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-11 00:52:38.024437 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-11 00:52:38.024448 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-11 00:52:38.024461 | orchestrator | 2026-04-11 00:52:38.024473 | orchestrator | 2026-04-11 00:52:38.024485 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:52:38.024497 | orchestrator | Saturday 11 April 2026 00:52:35 +0000 (0:00:02.917) 0:00:23.256 ******** 2026-04-11 00:52:38.024509 | orchestrator | =============================================================================== 2026-04-11 00:52:38.024521 | orchestrator | opensearch : Restart opensearch container ------------------------------- 2.92s 2026-04-11 00:52:38.024533 | orchestrator | 
service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.78s 2026-04-11 00:52:38.024545 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.65s 2026-04-11 00:52:38.024557 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.49s 2026-04-11 00:52:38.024568 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 1.95s 2026-04-11 00:52:38.024579 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.62s 2026-04-11 00:52:38.024590 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.39s 2026-04-11 00:52:38.024602 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.10s 2026-04-11 00:52:38.024613 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.06s 2026-04-11 00:52:38.024625 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.84s 2026-04-11 00:52:38.024638 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.73s 2026-04-11 00:52:38.024649 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.68s 2026-04-11 00:52:38.024671 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.66s 2026-04-11 00:52:38.024679 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s 2026-04-11 00:52:38.024686 | orchestrator | service-check-containers : opensearch | Notify handlers to restart containers --- 0.39s 2026-04-11 00:52:38.024693 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.30s 2026-04-11 00:52:38.024701 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-04-11 00:52:38.024708 | orchestrator 
| opensearch : include_tasks ---------------------------------------------- 0.26s 2026-04-11 00:52:38.024716 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.22s 2026-04-11 00:52:38.024723 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.18s 2026-04-11 00:52:41.057599 | orchestrator | 2026-04-11 00:52:41 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:52:41.059016 | orchestrator | 2026-04-11 00:52:41 | INFO  | Task 9d85c169-107b-4538-88fa-d5d29433ec6f is in state STARTED 2026-04-11 00:52:41.059061 | orchestrator | 2026-04-11 00:52:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:53:23.682878 | orchestrator | 2026-04-11 00:53:23 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:53:23.686943 | orchestrator | 2026-04-11 00:53:23 | INFO  | Task 9d85c169-107b-4538-88fa-d5d29433ec6f is in state SUCCESS 2026-04-11
00:53:23.687196 | orchestrator | 2026-04-11 00:53:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:53:23.687829 | orchestrator | 2026-04-11 00:53:23.687858 | orchestrator | 2026-04-11 00:53:23.687869 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-11 00:53:23.687879 | orchestrator | 2026-04-11 00:53:23.687890 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-11 00:53:23.687914 | orchestrator | Saturday 11 April 2026 00:52:12 +0000 (0:00:00.102) 0:00:00.102 ******** 2026-04-11 00:53:23.687925 | orchestrator | ok: [localhost] => { 2026-04-11 00:53:23.687937 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-04-11 00:53:23.688401 | orchestrator | } 2026-04-11 00:53:23.688421 | orchestrator | 2026-04-11 00:53:23.688446 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-11 00:53:23.688456 | orchestrator | Saturday 11 April 2026 00:52:12 +0000 (0:00:00.041) 0:00:00.144 ******** 2026-04-11 00:53:23.688467 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-11 00:53:23.688504 | orchestrator | ...ignoring 2026-04-11 00:53:23.688515 | orchestrator | 2026-04-11 00:53:23.688525 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-11 00:53:23.688535 | orchestrator | Saturday 11 April 2026 00:52:15 +0000 (0:00:02.979) 0:00:03.123 ******** 2026-04-11 00:53:23.688545 | orchestrator | skipping: [localhost] 2026-04-11 00:53:23.688554 | orchestrator | 2026-04-11 00:53:23.688564 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-11 00:53:23.688574 | orchestrator | Saturday 11 April 2026 00:52:15 +0000 (0:00:00.050) 0:00:03.174 ******** 2026-04-11 00:53:23.688584 | orchestrator | ok: [localhost] 2026-04-11 00:53:23.688593 | orchestrator | 2026-04-11 00:53:23.688603 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 00:53:23.688613 | orchestrator | 2026-04-11 00:53:23.688623 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 00:53:23.688632 | orchestrator | Saturday 11 April 2026 00:52:15 +0000 (0:00:00.188) 0:00:03.363 ******** 2026-04-11 00:53:23.688642 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:53:23.688652 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:53:23.688661 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:53:23.688671 | orchestrator | 2026-04-11 00:53:23.688681 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 00:53:23.688691 | orchestrator | Saturday 11 April 2026 00:52:16 +0000 (0:00:00.299) 0:00:03.663 ******** 2026-04-11 00:53:23.688700 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-11 00:53:23.688711 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-04-11 00:53:23.688720 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-11 00:53:23.688731 | orchestrator | 2026-04-11 00:53:23.688741 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-11 00:53:23.688750 | orchestrator | 2026-04-11 00:53:23.688760 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-11 00:53:23.688770 | orchestrator | Saturday 11 April 2026 00:52:16 +0000 (0:00:00.397) 0:00:04.060 ******** 2026-04-11 00:53:23.688780 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 00:53:23.688789 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-11 00:53:23.688799 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-11 00:53:23.688809 | orchestrator | 2026-04-11 00:53:23.688818 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-11 00:53:23.688828 | orchestrator | Saturday 11 April 2026 00:52:16 +0000 (0:00:00.352) 0:00:04.413 ******** 2026-04-11 00:53:23.688838 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:53:23.688848 | orchestrator | 2026-04-11 00:53:23.688858 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-11 00:53:23.688867 | orchestrator | Saturday 11 April 2026 00:52:17 +0000 (0:00:00.640) 0:00:05.053 ******** 2026-04-11 00:53:23.688923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 00:53:23.688995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 00:53:23.689051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 00:53:23.689082 | orchestrator | 2026-04-11 00:53:23.689099 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-11 00:53:23.689114 | orchestrator | Saturday 11 April 2026 00:52:20 +0000 (0:00:02.824) 0:00:07.878 ******** 2026-04-11 00:53:23.689128 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.689146 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.689162 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:53:23.689178 | orchestrator | 2026-04-11 00:53:23.689194 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-11 00:53:23.689221 | orchestrator | Saturday 11 April 2026 00:52:20 +0000 (0:00:00.698) 0:00:08.576 ******** 2026-04-11 00:53:23.689237 | 
orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.689251 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.689267 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:53:23.689281 | orchestrator | 2026-04-11 00:53:23.689296 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-11 00:53:23.689313 | orchestrator | Saturday 11 April 2026 00:52:22 +0000 (0:00:01.548) 0:00:10.125 ******** 2026-04-11 00:53:23.689331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 00:53:23.689364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 00:53:23.689401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 00:53:23.689421 | orchestrator | 2026-04-11 00:53:23.689437 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-11 00:53:23.689454 | orchestrator | Saturday 11 April 2026 00:52:26 +0000 (0:00:03.764) 0:00:13.889 ******** 2026-04-11 00:53:23.689470 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.689486 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.689503 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:53:23.689518 | orchestrator | 2026-04-11 00:53:23.689534 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-11 00:53:23.689555 | orchestrator | Saturday 11 April 2026 00:52:27 +0000 (0:00:01.049) 0:00:14.939 ******** 2026-04-11 00:53:23.689579 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:53:23.689594 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:53:23.689621 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:53:23.689637 | orchestrator | 2026-04-11 00:53:23.689652 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-11 00:53:23.689668 | orchestrator | Saturday 11 April 2026 00:52:30 +0000 (0:00:03.433) 0:00:18.373 ******** 2026-04-11 00:53:23.689683 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:53:23.689698 | orchestrator | 2026-04-11 00:53:23.689714 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-11 00:53:23.689728 | orchestrator | Saturday 11 April 2026 00:52:31 +0000 (0:00:00.463) 0:00:18.836 ******** 2026-04-11 00:53:23.689806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.689827 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.689844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.689872 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.689908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.689928 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.690110 | orchestrator | 2026-04-11 00:53:23.690131 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-11 00:53:23.690141 | orchestrator | Saturday 11 April 2026 
00:52:33 +0000 (0:00:02.470) 0:00:21.306 ******** 2026-04-11 00:53:23.690152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.690185 | orchestrator | skipping: [testbed-node-2] 
2026-04-11 00:53:23.690217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.690235 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.690246 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.690268 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.690278 | orchestrator | 2026-04-11 00:53:23.690288 | orchestrator | TASK 
[service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-11 00:53:23.690298 | orchestrator | Saturday 11 April 2026 00:52:35 +0000 (0:00:02.265) 0:00:23.572 ******** 2026-04-11 00:53:23.690327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.690338 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.690354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.690371 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.690382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 
00:53:23.690392 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.690402 | orchestrator | 2026-04-11 00:53:23.690417 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-11 00:53:23.690427 | orchestrator | Saturday 11 April 2026 00:52:38 +0000 (0:00:02.084) 0:00:25.656 ******** 2026-04-11 00:53:23.690438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 00:53:23.690533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 00:53:23.690573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 
00:53:23.690590 | orchestrator | 2026-04-11 00:53:23.690599 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-11 00:53:23.690607 | orchestrator | Saturday 11 April 2026 00:52:40 +0000 (0:00:02.564) 0:00:28.221 ******** 2026-04-11 00:53:23.690615 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 00:53:23.690623 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:53:23.690631 | orchestrator | } 2026-04-11 00:53:23.690639 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 00:53:23.690647 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:53:23.690655 | orchestrator | } 2026-04-11 00:53:23.690663 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 00:53:23.690671 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:53:23.690679 | orchestrator | } 2026-04-11 00:53:23.690687 | orchestrator | 2026-04-11 00:53:23.690695 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 00:53:23.690703 | orchestrator | Saturday 11 April 2026 00:52:40 +0000 (0:00:00.275) 0:00:28.497 ******** 2026-04-11 00:53:23.690712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.690721 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.690741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.690755 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.690764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.690772 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.690781 | orchestrator | 2026-04-11 00:53:23.690788 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-11 00:53:23.690797 | orchestrator | Saturday 11 April 2026 00:52:42 +0000 (0:00:02.029) 0:00:30.527 ******** 2026-04-11 00:53:23.690805 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.690812 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.690820 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.690828 | orchestrator | 2026-04-11 00:53:23.690836 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-11 00:53:23.690852 | orchestrator | Saturday 11 April 2026 00:52:43 +0000 (0:00:00.380) 0:00:30.908 ******** 2026-04-11 00:53:23.690861 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.690869 | orchestrator | 2026-04-11 00:53:23.690877 | orchestrator | TASK [mariadb : Stop MariaDB containers] 
*************************************** 2026-04-11 00:53:23.690885 | orchestrator | Saturday 11 April 2026 00:52:43 +0000 (0:00:00.089) 0:00:30.998 ******** 2026-04-11 00:53:23.690893 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.690901 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.690909 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.690917 | orchestrator | 2026-04-11 00:53:23.690929 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-04-11 00:53:23.690943 | orchestrator | Saturday 11 April 2026 00:52:43 +0000 (0:00:00.266) 0:00:31.264 ******** 2026-04-11 00:53:23.690976 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.690984 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.690998 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.691011 | orchestrator | 2026-04-11 00:53:23.691028 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-04-11 00:53:23.691047 | orchestrator | Saturday 11 April 2026 00:52:43 +0000 (0:00:00.272) 0:00:31.537 ******** 2026-04-11 00:53:23.691060 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.691073 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.691085 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.691099 | orchestrator | 2026-04-11 00:53:23.691110 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-04-11 00:53:23.691121 | orchestrator | Saturday 11 April 2026 00:52:44 +0000 (0:00:00.267) 0:00:31.805 ******** 2026-04-11 00:53:23.691134 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.691146 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.691159 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.691171 | orchestrator | 2026-04-11 00:53:23.691184 | orchestrator | TASK [mariadb : Removing MariaDB log file 
from /tmp] *************************** 2026-04-11 00:53:23.691196 | orchestrator | Saturday 11 April 2026 00:52:44 +0000 (0:00:00.383) 0:00:32.188 ******** 2026-04-11 00:53:23.691204 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.691212 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.691220 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.691228 | orchestrator | 2026-04-11 00:53:23.691235 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-04-11 00:53:23.691243 | orchestrator | Saturday 11 April 2026 00:52:44 +0000 (0:00:00.259) 0:00:32.447 ******** 2026-04-11 00:53:23.691251 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.691259 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.691266 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.691274 | orchestrator | 2026-04-11 00:53:23.691286 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-04-11 00:53:23.691302 | orchestrator | Saturday 11 April 2026 00:52:45 +0000 (0:00:00.300) 0:00:32.748 ******** 2026-04-11 00:53:23.691321 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-11 00:53:23.691333 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-11 00:53:23.691345 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-11 00:53:23.691359 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.691373 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-11 00:53:23.691384 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-11 00:53:23.691392 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-11 00:53:23.691402 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.691415 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-11 
00:53:23.691426 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-11 00:53:23.691447 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-11 00:53:23.691462 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.691474 | orchestrator | 2026-04-11 00:53:23.691487 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-04-11 00:53:23.691499 | orchestrator | Saturday 11 April 2026 00:52:45 +0000 (0:00:00.315) 0:00:33.064 ******** 2026-04-11 00:53:23.691511 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.691524 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.691534 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.691545 | orchestrator | 2026-04-11 00:53:23.691559 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-04-11 00:53:23.691572 | orchestrator | Saturday 11 April 2026 00:52:45 +0000 (0:00:00.402) 0:00:33.466 ******** 2026-04-11 00:53:23.691597 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.691610 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.691623 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.691636 | orchestrator | 2026-04-11 00:53:23.691649 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-04-11 00:53:23.691662 | orchestrator | Saturday 11 April 2026 00:52:46 +0000 (0:00:00.293) 0:00:33.760 ******** 2026-04-11 00:53:23.691671 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.691678 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.691686 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.691694 | orchestrator | 2026-04-11 00:53:23.691702 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-04-11 00:53:23.691710 | orchestrator | Saturday 11 April 2026 00:52:46 
+0000 (0:00:00.264) 0:00:34.024 ******** 2026-04-11 00:53:23.691718 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.691726 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.691734 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.691742 | orchestrator | 2026-04-11 00:53:23.691750 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-04-11 00:53:23.691758 | orchestrator | Saturday 11 April 2026 00:52:46 +0000 (0:00:00.247) 0:00:34.272 ******** 2026-04-11 00:53:23.691765 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.691773 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.691781 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.691789 | orchestrator | 2026-04-11 00:53:23.691797 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-04-11 00:53:23.691813 | orchestrator | Saturday 11 April 2026 00:52:47 +0000 (0:00:00.411) 0:00:34.684 ******** 2026-04-11 00:53:23.691822 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.691830 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.691838 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.691845 | orchestrator | 2026-04-11 00:53:23.691853 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-11 00:53:23.691861 | orchestrator | Saturday 11 April 2026 00:52:47 +0000 (0:00:00.248) 0:00:34.933 ******** 2026-04-11 00:53:23.691869 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.691877 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.691891 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.691899 | orchestrator | 2026-04-11 00:53:23.691907 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-11 00:53:23.691915 | orchestrator | Saturday 11 April 2026 00:52:47 
+0000 (0:00:00.271) 0:00:35.204 ******** 2026-04-11 00:53:23.691923 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.691931 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.691939 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.692004 | orchestrator | 2026-04-11 00:53:23.692015 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-04-11 00:53:23.692022 | orchestrator | Saturday 11 April 2026 00:52:47 +0000 (0:00:00.262) 0:00:35.467 ******** 2026-04-11 00:53:23.692032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.692049 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.692069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.692079 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.692088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.692102 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.692110 | orchestrator | 2026-04-11 00:53:23.692118 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-04-11 00:53:23.692126 | orchestrator | Saturday 11 April 2026 00:52:49 +0000 (0:00:01.833) 0:00:37.301 ******** 2026-04-11 00:53:23.692134 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.692142 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.692150 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.692157 | orchestrator | 2026-04-11 00:53:23.692165 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-11 00:53:23.692173 | orchestrator | Saturday 11 April 2026 00:52:50 +0000 (0:00:00.418) 0:00:37.719 ******** 2026-04-11 00:53:23.692192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': 
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.692201 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.692210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.692224 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.692239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 00:53:23.692248 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.692256 | orchestrator | 2026-04-11 00:53:23.692264 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-11 00:53:23.692280 | orchestrator | Saturday 11 April 2026 00:52:51 +0000 (0:00:01.830) 0:00:39.550 ******** 2026-04-11 00:53:23.692288 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.692296 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.692304 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.692312 | orchestrator | 2026-04-11 00:53:23.692320 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-11 00:53:23.692327 | orchestrator | Saturday 11 April 2026 00:52:52 +0000 (0:00:00.296) 0:00:39.846 ******** 2026-04-11 00:53:23.692334 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.692341 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.692352 | orchestrator | 
skipping: [testbed-node-2] 2026-04-11 00:53:23.692359 | orchestrator | 2026-04-11 00:53:23.692366 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-11 00:53:23.692373 | orchestrator | Saturday 11 April 2026 00:52:52 +0000 (0:00:00.278) 0:00:40.125 ******** 2026-04-11 00:53:23.692379 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.692386 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.692396 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.692408 | orchestrator | 2026-04-11 00:53:23.692419 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-11 00:53:23.692430 | orchestrator | Saturday 11 April 2026 00:52:52 +0000 (0:00:00.392) 0:00:40.518 ******** 2026-04-11 00:53:23.692441 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.692452 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.692464 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.692475 | orchestrator | 2026-04-11 00:53:23.692486 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-11 00:53:23.692498 | orchestrator | Saturday 11 April 2026 00:52:53 +0000 (0:00:00.489) 0:00:41.007 ******** 2026-04-11 00:53:23.692510 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.692522 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.692535 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.692547 | orchestrator | 2026-04-11 00:53:23.692558 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-11 00:53:23.692572 | orchestrator | Saturday 11 April 2026 00:52:53 +0000 (0:00:00.264) 0:00:41.271 ******** 2026-04-11 00:53:23.692582 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:53:23.692589 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:53:23.692595 | orchestrator | 
changed: [testbed-node-2] 2026-04-11 00:53:23.692602 | orchestrator | 2026-04-11 00:53:23.692609 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-11 00:53:23.692616 | orchestrator | Saturday 11 April 2026 00:52:54 +0000 (0:00:00.885) 0:00:42.157 ******** 2026-04-11 00:53:23.692623 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:53:23.692629 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:53:23.692636 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:53:23.692643 | orchestrator | 2026-04-11 00:53:23.692649 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-11 00:53:23.692656 | orchestrator | Saturday 11 April 2026 00:52:54 +0000 (0:00:00.271) 0:00:42.428 ******** 2026-04-11 00:53:23.692663 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:53:23.692669 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:53:23.692676 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:53:23.692683 | orchestrator | 2026-04-11 00:53:23.692689 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-11 00:53:23.692696 | orchestrator | Saturday 11 April 2026 00:52:55 +0000 (0:00:00.252) 0:00:42.681 ******** 2026-04-11 00:53:23.692704 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-11 00:53:23.692712 | orchestrator | ...ignoring 2026-04-11 00:53:23.692719 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-11 00:53:23.692726 | orchestrator | ...ignoring 2026-04-11 00:53:23.692733 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-11 00:53:23.692740 | orchestrator | ...ignoring 2026-04-11 00:53:23.692747 | orchestrator | 2026-04-11 00:53:23.692754 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-11 00:53:23.692760 | orchestrator | Saturday 11 April 2026 00:53:05 +0000 (0:00:10.665) 0:00:53.346 ******** 2026-04-11 00:53:23.692767 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:53:23.692774 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:53:23.692780 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:53:23.692793 | orchestrator | 2026-04-11 00:53:23.692800 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-11 00:53:23.692807 | orchestrator | Saturday 11 April 2026 00:53:06 +0000 (0:00:00.375) 0:00:53.721 ******** 2026-04-11 00:53:23.692814 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.692820 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.692827 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.692833 | orchestrator | 2026-04-11 00:53:23.692840 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-11 00:53:23.692847 | orchestrator | Saturday 11 April 2026 00:53:06 +0000 (0:00:00.270) 0:00:53.992 ******** 2026-04-11 00:53:23.692853 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.692860 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.692867 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.692873 | orchestrator | 2026-04-11 00:53:23.692886 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-11 00:53:23.692893 | orchestrator | Saturday 11 April 2026 00:53:06 +0000 (0:00:00.301) 0:00:54.293 ******** 2026-04-11 00:53:23.692900 | orchestrator | skipping: 
[testbed-node-0] 2026-04-11 00:53:23.692906 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.692913 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.692920 | orchestrator | 2026-04-11 00:53:23.692926 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-11 00:53:23.692933 | orchestrator | Saturday 11 April 2026 00:53:06 +0000 (0:00:00.292) 0:00:54.586 ******** 2026-04-11 00:53:23.692939 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:53:23.692969 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:53:23.692977 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:53:23.692984 | orchestrator | 2026-04-11 00:53:23.692991 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-11 00:53:23.692998 | orchestrator | Saturday 11 April 2026 00:53:07 +0000 (0:00:00.286) 0:00:54.872 ******** 2026-04-11 00:53:23.693005 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:23.693011 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.693020 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.693031 | orchestrator | 2026-04-11 00:53:23.693042 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-11 00:53:23.693052 | orchestrator | Saturday 11 April 2026 00:53:07 +0000 (0:00:00.466) 0:00:55.339 ******** 2026-04-11 00:53:23.693061 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.693071 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.693081 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-11 00:53:23.693091 | orchestrator | 2026-04-11 00:53:23.693100 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-11 00:53:23.693110 | orchestrator | Saturday 11 April 2026 00:53:08 +0000 (0:00:00.354) 0:00:55.694 ******** 2026-04-11 
00:53:23.693121 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmariadb-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_u0u1p44b/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_u0u1p44b/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_u0u1p44b/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmariadb-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/mariadb-server not found\")\\n'"} 2026-04-11 00:53:23.693141 | orchestrator | 2026-04-11 00:53:23.693151 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-11 00:53:23.693161 | orchestrator | Saturday 11 April 2026 00:53:12 +0000 (0:00:04.117) 0:00:59.812 ******** 2026-04-11 00:53:23.693172 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.693183 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.693196 | orchestrator | 2026-04-11 00:53:23.693207 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-11 00:53:23.693218 | orchestrator | Saturday 11 April 2026 00:53:12 +0000 (0:00:00.496) 0:01:00.308 ******** 2026-04-11 00:53:23.693230 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:23.693241 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:23.693252 | orchestrator | 2026-04-11 00:53:23.693270 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-11 00:53:23.693282 | orchestrator | Saturday 11 April 2026 00:53:12 +0000 (0:00:00.177) 0:01:00.486 ******** 2026-04-11 00:53:23.693299 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:53:23.693311 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:53:23.693322 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-11 00:53:23.693333 | orchestrator | 2026-04-11 00:53:23.693343 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-11 00:53:23.693354 | orchestrator | skipping: no hosts matched 2026-04-11 00:53:23.693363 | orchestrator | 2026-04-11 00:53:23.693381 | orchestrator | PLAY [Start mariadb 
services] ************************************************** 2026-04-11 00:53:23.693392 | orchestrator | 2026-04-11 00:53:23.693404 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-11 00:53:23.693416 | orchestrator | Saturday 11 April 2026 00:53:13 +0000 (0:00:00.205) 0:01:00.691 ******** 2026-04-11 00:53:23.693428 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmariadb-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_61iolqwv/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_61iolqwv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_61iolqwv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_61iolqwv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n 
^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmariadb-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/mariadb-server not found\")\\n'"} 2026-04-11 00:53:23.693448 | orchestrator | 2026-04-11 00:53:23.693456 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:53:23.693463 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-11 00:53:23.693470 | orchestrator | testbed-node-0 : ok=20  changed=9  unreachable=0 failed=1  skipped=33  rescued=0 ignored=1  2026-04-11 00:53:23.693479 | orchestrator | testbed-node-1 : ok=16  changed=7  unreachable=0 failed=1  skipped=38  rescued=0 ignored=1  2026-04-11 00:53:23.693486 | orchestrator | testbed-node-2 : ok=16  changed=7  unreachable=0 failed=0 skipped=38  rescued=0 ignored=1  2026-04-11 00:53:23.693493 | orchestrator | 2026-04-11 00:53:23.693500 | orchestrator | 2026-04-11 00:53:23.693507 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:53:23.693514 | orchestrator | Saturday 11 April 2026 00:53:22 +0000 (0:00:09.505) 0:01:10.197 ******** 2026-04-11 00:53:23.693526 | orchestrator | =============================================================================== 2026-04-11 00:53:23.693533 | orchestrator | 
mariadb : Check MariaDB service port liveness -------------------------- 10.67s 2026-04-11 00:53:23.693540 | orchestrator | mariadb : Restart MariaDB container ------------------------------------- 9.51s 2026-04-11 00:53:23.693546 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 4.12s 2026-04-11 00:53:23.693553 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.76s 2026-04-11 00:53:23.693564 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.43s 2026-04-11 00:53:23.693571 | orchestrator | Check MariaDB service --------------------------------------------------- 2.98s 2026-04-11 00:53:23.693578 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.82s 2026-04-11 00:53:23.693585 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 2.56s 2026-04-11 00:53:23.693591 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.47s 2026-04-11 00:53:23.693598 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.27s 2026-04-11 00:53:23.693605 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.08s 2026-04-11 00:53:23.693616 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.03s 2026-04-11 00:53:23.693623 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 1.83s 2026-04-11 00:53:23.693630 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 1.83s 2026-04-11 00:53:23.693636 | orchestrator | mariadb : Copying over my.cnf for mariabackup --------------------------- 1.55s 2026-04-11 00:53:23.693643 | orchestrator | mariadb : Copying over config.json files for mariabackup ---------------- 1.05s 2026-04-11 00:53:23.693649 | orchestrator | mariadb : 
Create MariaDB volume ----------------------------------------- 0.89s 2026-04-11 00:53:23.693656 | orchestrator | mariadb : Ensuring database backup config directory exists -------------- 0.70s 2026-04-11 00:53:23.693662 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.64s 2026-04-11 00:53:23.693669 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.50s 2026-04-11 00:53:26.722335 | orchestrator | 2026-04-11 00:53:26 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:53:26.724519 | orchestrator | 2026-04-11 00:53:26 | INFO  | Task 36f22b4b-124a-4965-9b82-5b3a87c2f12a is in state STARTED 2026-04-11 00:53:26.725650 | orchestrator | 2026-04-11 00:53:26 | INFO  | Task 07df6e07-cd49-4092-9d07-850fde14147f is in state STARTED 2026-04-11 00:53:26.725789 | orchestrator | 2026-04-11 00:53:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:53:54.038700 | orchestrator | 2026-04-11 00:53:54 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:53:54.040920 | orchestrator | 2026-04-11 00:53:54 | INFO  | Task 36f22b4b-124a-4965-9b82-5b3a87c2f12a is in state SUCCESS 2026-04-11 00:53:54.042299 | orchestrator | 2026-04-11 00:53:54.042344 | orchestrator | 2026-04-11 00:53:54.042353 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 00:53:54.042363 | orchestrator | 2026-04-11 00:53:54.042371 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 00:53:54.042379 | orchestrator | Saturday 11 April 2026 00:53:25 +0000 (0:00:00.278) 0:00:00.278 ******** 2026-04-11 00:53:54.042386 | orchestrator | ok: [testbed-node-0] 2026-04-11
00:53:54.042396 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:53:54.042404 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:53:54.042412 | orchestrator | 2026-04-11 00:53:54.042420 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 00:53:54.042467 | orchestrator | Saturday 11 April 2026 00:53:25 +0000 (0:00:00.261) 0:00:00.540 ******** 2026-04-11 00:53:54.042473 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-11 00:53:54.042479 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-11 00:53:54.042484 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-11 00:53:54.042489 | orchestrator | 2026-04-11 00:53:54.042494 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-11 00:53:54.042499 | orchestrator | 2026-04-11 00:53:54.042503 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-11 00:53:54.042508 | orchestrator | Saturday 11 April 2026 00:53:26 +0000 (0:00:00.254) 0:00:00.795 ******** 2026-04-11 00:53:54.042514 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:53:54.042519 | orchestrator | 2026-04-11 00:53:54.042524 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-11 00:53:54.042696 | orchestrator | Saturday 11 April 2026 00:53:26 +0000 (0:00:00.504) 0:00:01.299 ******** 2026-04-11 00:53:54.042727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 00:53:54.042754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 00:53:54.042777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 00:53:54.042786 | orchestrator |
2026-04-11 00:53:54.042794 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-04-11 00:53:54.042802 | orchestrator | Saturday 11 April 2026 00:53:28 +0000 (0:00:01.541) 0:00:02.841 ********
2026-04-11 00:53:54.042810 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:53:54.042818 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:53:54.042826 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:53:54.042833 | orchestrator |
2026-04-11 00:53:54.042846 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-11 00:53:54.042851 | orchestrator | Saturday 11 April 2026 00:53:28 +0000 (0:00:00.239) 0:00:03.081 ********
2026-04-11 00:53:54.042856 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-11 00:53:54.042861 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-11 00:53:54.042866 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-04-11 00:53:54.042907 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-04-11 00:53:54.042913 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-04-11 00:53:54.042925 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-04-11 00:53:54.042930 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-04-11 00:53:54.042934 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-04-11 00:53:54.042939 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-11 00:53:54.042944 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-11 00:53:54.042948 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-04-11 00:53:54.042953 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-04-11 00:53:54.042957 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-04-11 00:53:54.042962 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-04-11 00:53:54.042966 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-04-11 00:53:54.042971 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-04-11 00:53:54.042975 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-11 00:53:54.042980 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-11 00:53:54.042985 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-04-11 00:53:54.042989 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-04-11 00:53:54.042994 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-04-11 00:53:54.042998 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-04-11 00:53:54.043003 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-04-11 00:53:54.043010 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-04-11 00:53:54.043019 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-04-11 00:53:54.043028 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-04-11 00:53:54.043039 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-04-11 00:53:54.043047 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-04-11 00:53:54.043072 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-04-11 00:53:54.043081 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-04-11 00:53:54.043086 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-04-11 00:53:54.043135 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-04-11 00:53:54.043142 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-04-11 00:53:54.043148 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-04-11 00:53:54.043158 | orchestrator |
2026-04-11 00:53:54.043162 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 00:53:54.043167 | orchestrator | Saturday 11 April 2026 00:53:29 +0000 (0:00:00.637) 0:00:03.718 ********
2026-04-11 00:53:54.043172 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:53:54.043186 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:53:54.043196 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:53:54.043200 | orchestrator |
2026-04-11 00:53:54.043205 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 00:53:54.043210 | orchestrator | Saturday 11 April 2026 00:53:29 +0000 (0:00:00.106) 0:00:04.098 ********
2026-04-11 00:53:54.043214 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043219 | orchestrator |
2026-04-11 00:53:54.043224 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 00:53:54.043228 | orchestrator | Saturday 11 April 2026 00:53:29 +0000 (0:00:00.246) 0:00:04.205 ********
2026-04-11 00:53:54.043233 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043237 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:53:54.043242 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:53:54.043247 | orchestrator |
2026-04-11 00:53:54.043251 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 00:53:54.043256 | orchestrator | Saturday 11 April 2026 00:53:29 +0000 (0:00:00.239) 0:00:04.452 ********
2026-04-11 00:53:54.043260 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:53:54.043265 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:53:54.043270 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:53:54.043274 | orchestrator |
2026-04-11 00:53:54.043279 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 00:53:54.043284 | orchestrator | Saturday 11 April 2026 00:53:30 +0000 (0:00:00.239) 0:00:04.691 ********
2026-04-11 00:53:54.043288 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043293 | orchestrator |
2026-04-11 00:53:54.043297 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 00:53:54.043302 | orchestrator | Saturday 11 April 2026 00:53:30 +0000 (0:00:00.094) 0:00:04.786 ********
2026-04-11 00:53:54.043306 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043311 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:53:54.043316 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:53:54.043320 | orchestrator |
2026-04-11 00:53:54.043325 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 00:53:54.043329 | orchestrator | Saturday 11 April 2026 00:53:30 +0000 (0:00:00.415) 0:00:05.201 ********
2026-04-11 00:53:54.043334 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:53:54.043339 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:53:54.043343 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:53:54.043348 | orchestrator |
2026-04-11 00:53:54.043353 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 00:53:54.043357 | orchestrator | Saturday 11 April 2026 00:53:30 +0000 (0:00:00.327) 0:00:05.528 ********
2026-04-11 00:53:54.043362 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043366 | orchestrator |
2026-04-11 00:53:54.043371 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 00:53:54.043376 | orchestrator | Saturday 11 April 2026 00:53:31 +0000 (0:00:00.109) 0:00:05.638 ********
2026-04-11 00:53:54.043380 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043385 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:53:54.043390 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:53:54.043394 | orchestrator |
2026-04-11 00:53:54.043399 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 00:53:54.043404 | orchestrator | Saturday 11 April 2026 00:53:31 +0000 (0:00:00.248) 0:00:05.887 ********
2026-04-11 00:53:54.043408 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:53:54.043413 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:53:54.043417 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:53:54.043426 | orchestrator |
2026-04-11 00:53:54.043431 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 00:53:54.043435 | orchestrator | Saturday 11 April 2026 00:53:31 +0000 (0:00:00.267) 0:00:06.154 ********
2026-04-11 00:53:54.043440 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043444 | orchestrator |
2026-04-11 00:53:54.043449 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 00:53:54.043457 | orchestrator | Saturday 11 April 2026 00:53:31 +0000 (0:00:00.100) 0:00:06.255 ********
2026-04-11 00:53:54.043462 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043466 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:53:54.043471 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:53:54.043475 | orchestrator |
2026-04-11 00:53:54.043480 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 00:53:54.043485 | orchestrator | Saturday 11 April 2026 00:53:32 +0000 (0:00:00.386) 0:00:06.641 ********
2026-04-11 00:53:54.043489 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:53:54.043494 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:53:54.043498 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:53:54.043503 | orchestrator |
2026-04-11 00:53:54.043507 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 00:53:54.043512 | orchestrator | Saturday 11 April 2026 00:53:32 +0000 (0:00:00.261) 0:00:06.903 ********
2026-04-11 00:53:54.043516 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043521 | orchestrator |
2026-04-11 00:53:54.043526 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 00:53:54.043530 | orchestrator | Saturday 11 April 2026 00:53:32 +0000 (0:00:00.119) 0:00:07.022 ********
2026-04-11 00:53:54.043535 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043539 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:53:54.043544 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:53:54.043549 | orchestrator |
2026-04-11 00:53:54.043553 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 00:53:54.043558 | orchestrator | Saturday 11 April 2026 00:53:32 +0000 (0:00:00.244) 0:00:07.267 ********
2026-04-11 00:53:54.043562 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:53:54.043567 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:53:54.043572 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:53:54.043576 | orchestrator |
2026-04-11 00:53:54.043581 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 00:53:54.043585 | orchestrator | Saturday 11 April 2026 00:53:32 +0000 (0:00:00.257) 0:00:07.525 ********
2026-04-11 00:53:54.043590 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043594 | orchestrator |
2026-04-11 00:53:54.043599 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 00:53:54.043603 | orchestrator | Saturday 11 April 2026 00:53:33 +0000 (0:00:00.100) 0:00:07.626 ********
2026-04-11 00:53:54.043611 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043616 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:53:54.043621 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:53:54.043625 | orchestrator |
2026-04-11 00:53:54.043630 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 00:53:54.043634 | orchestrator | Saturday 11 April 2026 00:53:33 +0000 (0:00:00.382) 0:00:08.009 ********
2026-04-11 00:53:54.043639 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:53:54.043644 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:53:54.043648 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:53:54.043653 | orchestrator |
2026-04-11 00:53:54.043658 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 00:53:54.043662 | orchestrator | Saturday 11 April 2026 00:53:33 +0000 (0:00:00.270) 0:00:08.279 ********
2026-04-11 00:53:54.043667 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043671 | orchestrator |
2026-04-11 00:53:54.043676 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 00:53:54.043684 | orchestrator | Saturday 11 April 2026 00:53:33 +0000 (0:00:00.192) 0:00:08.472 ********
2026-04-11 00:53:54.043689 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043694 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:53:54.043698 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:53:54.043703 | orchestrator |
2026-04-11 00:53:54.043707 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 00:53:54.043712 | orchestrator | Saturday 11 April 2026 00:53:34 +0000 (0:00:00.305) 0:00:08.778 ********
2026-04-11 00:53:54.043716 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:53:54.043721 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:53:54.043726 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:53:54.043730 | orchestrator |
2026-04-11 00:53:54.043735 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 00:53:54.043739 | orchestrator | Saturday 11 April 2026 00:53:34 +0000 (0:00:00.251) 0:00:09.030 ********
2026-04-11 00:53:54.043744 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043749 | orchestrator |
2026-04-11 00:53:54.043753 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 00:53:54.043758 | orchestrator | Saturday 11 April 2026 00:53:34 +0000 (0:00:00.233) 0:00:09.263 ********
2026-04-11 00:53:54.043762 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043767 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:53:54.043771 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:53:54.043776 | orchestrator |
2026-04-11 00:53:54.043782 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 00:53:54.043787 | orchestrator | Saturday 11 April 2026 00:53:34 +0000 (0:00:00.243) 0:00:09.507 ********
2026-04-11 00:53:54.043793 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:53:54.043798 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:53:54.043804 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:53:54.043809 | orchestrator |
2026-04-11 00:53:54.043814 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 00:53:54.043820 | orchestrator | Saturday 11 April 2026 00:53:35 +0000 (0:00:00.245) 0:00:09.753 ********
2026-04-11 00:53:54.043826 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043831 | orchestrator |
2026-04-11 00:53:54.043836 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 00:53:54.043841 | orchestrator | Saturday 11 April 2026 00:53:35 +0000 (0:00:00.098) 0:00:09.851 ********
2026-04-11 00:53:54.043847 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043852 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:53:54.043857 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:53:54.043863 | orchestrator |
2026-04-11 00:53:54.043893 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 00:53:54.043899 | orchestrator | Saturday 11 April 2026 00:53:35 +0000 (0:00:00.362) 0:00:10.214 ********
2026-04-11 00:53:54.043904 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:53:54.043910 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:53:54.043915 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:53:54.043921 | orchestrator |
2026-04-11 00:53:54.043929 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 00:53:54.043935 | orchestrator | Saturday 11 April 2026 00:53:36 +0000 (0:00:00.404) 0:00:10.619 ********
2026-04-11 00:53:54.043940 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043945 | orchestrator |
2026-04-11 00:53:54.043950 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 00:53:54.043956 | orchestrator | Saturday 11 April 2026 00:53:36 +0000 (0:00:00.098) 0:00:10.717 ********
2026-04-11 00:53:54.043961 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.043966 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:53:54.043972 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:53:54.043977 | orchestrator |
2026-04-11 00:53:54.043983 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-11 00:53:54.043988 | orchestrator | Saturday 11 April 2026 00:53:36 +0000 (0:00:00.252) 0:00:10.969 ********
2026-04-11 00:53:54.043997 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:53:54.044003 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:53:54.044008 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:53:54.044014 | orchestrator |
2026-04-11 00:53:54.044021 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-11 00:53:54.044029 | orchestrator | Saturday 11 April 2026 00:53:37 +0000 (0:00:01.479) 0:00:12.449 ********
2026-04-11 00:53:54.044037 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-11 00:53:54.044045 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-11 00:53:54.044052 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-11 00:53:54.044059 | orchestrator |
2026-04-11 00:53:54.044067 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-11 00:53:54.044074 | orchestrator | Saturday 11 April 2026 00:53:39 +0000 (0:00:01.538) 0:00:13.988 ********
2026-04-11 00:53:54.044081 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-11 00:53:54.044094 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-11 00:53:54.044102 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-11 00:53:54.044109 | orchestrator |
2026-04-11 00:53:54.044116 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-11 00:53:54.044123 | orchestrator | Saturday 11 April 2026 00:53:41 +0000 (0:00:02.114) 0:00:16.102 ********
2026-04-11 00:53:54.044130 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-11 00:53:54.044138 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-11 00:53:54.044145 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-11 00:53:54.044152 | orchestrator |
2026-04-11 00:53:54.044158 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-11 00:53:54.044165 | orchestrator | Saturday 11 April 2026 00:53:42 +0000 (0:00:01.462) 0:00:17.564 ********
2026-04-11 00:53:54.044172 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.044178 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:53:54.044186 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:53:54.044193 | orchestrator |
2026-04-11 00:53:54.044199 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-11 00:53:54.044206 | orchestrator | Saturday 11 April 2026 00:53:43 +0000 (0:00:00.262) 0:00:17.827 ********
2026-04-11 00:53:54.044213 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:53:54.044236 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:53:54.044244 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:53:54.044251 | orchestrator |
2026-04-11 00:53:54.044258 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-11 00:53:54.044265 | orchestrator | Saturday 11 April 2026 00:53:43 +0000 (0:00:00.281) 0:00:18.109 ********
2026-04-11 00:53:54.044276 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:53:54.044283 | orchestrator |
2026-04-11 00:53:54.044290 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-04-11 00:53:54.044298 | orchestrator | Saturday 11 April 2026 00:53:44 +0000 (0:00:00.765) 0:00:18.874 ********
2026-04-11 00:53:54.044313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 00:53:54.044338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 00:53:54.044367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 00:53:54.044377 | orchestrator | 2026-04-11 00:53:54.044384 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-11 00:53:54.044391 | orchestrator | Saturday 11 April 2026 00:53:45 +0000 (0:00:01.298) 0:00:20.173 ******** 2026-04-11 00:53:54.044398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 00:53:54.044415 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:54.044430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 00:53:54.044438 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:54.044449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 00:53:54.044461 | orchestrator | skipping: [testbed-node-2] 2026-04-11 
00:53:54.044469 | orchestrator | 2026-04-11 00:53:54.044476 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-11 00:53:54.044483 | orchestrator | Saturday 11 April 2026 00:53:46 +0000 (0:00:00.690) 0:00:20.863 ******** 2026-04-11 00:53:54.044496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 00:53:54.044505 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:54.044517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 00:53:54.044530 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:54.044544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 00:53:54.044558 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:54.044565 | orchestrator | 2026-04-11 00:53:54.044572 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-04-11 00:53:54.044580 | orchestrator | Saturday 11 April 2026 00:53:47 +0000 (0:00:01.065) 0:00:21.928 ******** 2026-04-11 00:53:54.044597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 00:53:54.044607 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 00:53:54.044631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 00:53:54.044640 | orchestrator | 2026-04-11 00:53:54.044648 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-04-11 00:53:54.044656 | orchestrator | Saturday 11 April 2026 00:53:48 +0000 (0:00:01.176) 0:00:23.105 ******** 2026-04-11 00:53:54.044663 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 00:53:54.044671 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:53:54.044679 | orchestrator | } 2026-04-11 00:53:54.044687 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 00:53:54.044695 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:53:54.044704 | orchestrator | } 2026-04-11 00:53:54.044711 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 00:53:54.044724 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:53:54.044732 | orchestrator | } 2026-04-11 00:53:54.044737 | orchestrator | 2026-04-11 00:53:54.044741 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 00:53:54.044746 | orchestrator | Saturday 11 April 2026 00:53:48 +0000 (0:00:00.307) 0:00:23.413 ******** 2026-04-11 00:53:54.044754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 00:53:54.044760 | orchestrator | skipping: [testbed-node-0] 
2026-04-11 00:53:54.044770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 00:53:54.044779 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:54.044790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 00:53:54.044799 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:54.044806 | orchestrator | 2026-04-11 00:53:54.044813 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-11 00:53:54.044821 | orchestrator | Saturday 11 April 2026 00:53:49 +0000 (0:00:00.972) 0:00:24.385 ******** 2026-04-11 00:53:54.044832 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:53:54.044838 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:53:54.044845 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:53:54.044852 | orchestrator | 2026-04-11 00:53:54.044859 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-11 00:53:54.044866 | orchestrator | Saturday 11 April 2026 00:53:50 +0000 (0:00:00.276) 0:00:24.662 ******** 2026-04-11 00:53:54.044955 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:53:54.044970 | orchestrator | 2026-04-11 00:53:54.044977 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-11 00:53:54.044985 | orchestrator | Saturday 11 April 2026 00:53:50 +0000 (0:00:00.495) 0:00:25.157 ******** 2026-04-11 00:53:54.044992 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-11 00:53:54.044999 | orchestrator | 2026-04-11 00:53:54.045006 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 00:53:54.045014 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=1  skipped=26  rescued=0 ignored=0 2026-04-11 00:53:54.045023 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-11 00:53:54.045031 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-11 00:53:54.045038 | orchestrator | 2026-04-11 00:53:54.045046 | orchestrator | 2026-04-11 00:53:54.045054 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 00:53:54.045061 | orchestrator | Saturday 11 April 2026 00:53:51 +0000 (0:00:00.841) 0:00:25.999 ******** 2026-04-11 00:53:54.045069 | orchestrator | =============================================================================== 2026-04-11 00:53:54.045077 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.11s 2026-04-11 00:53:54.045084 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.54s 2026-04-11 00:53:54.045092 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.54s 2026-04-11 00:53:54.045100 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.48s 2026-04-11 00:53:54.045107 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.46s 2026-04-11 00:53:54.045114 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.30s 2026-04-11 00:53:54.045122 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.18s 2026-04-11 
00:53:54.045127 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.07s 2026-04-11 00:53:54.045132 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.97s 2026-04-11 00:53:54.045136 | orchestrator | horizon : Creating Horizon database ------------------------------------- 0.84s 2026-04-11 00:53:54.045141 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2026-04-11 00:53:54.045145 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.69s 2026-04-11 00:53:54.045150 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.64s 2026-04-11 00:53:54.045159 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s 2026-04-11 00:53:54.045164 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s 2026-04-11 00:53:54.045168 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.42s 2026-04-11 00:53:54.045173 | orchestrator | horizon : Update policy file name --------------------------------------- 0.40s 2026-04-11 00:53:54.045178 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.39s 2026-04-11 00:53:54.045182 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.38s 2026-04-11 00:53:54.045187 | orchestrator | horizon : Update policy file name --------------------------------------- 0.38s 2026-04-11 00:53:54.045191 | orchestrator | 2026-04-11 00:53:54 | INFO  | Task 07df6e07-cd49-4092-9d07-850fde14147f is in state STARTED 2026-04-11 00:53:54.045196 | orchestrator | 2026-04-11 00:53:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:53:57.083411 | orchestrator | 2026-04-11 00:53:57 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 
2026-04-11 00:53:57.083808 | orchestrator | 2026-04-11 00:53:57 | INFO  | Task 07df6e07-cd49-4092-9d07-850fde14147f is in state STARTED 2026-04-11 00:53:57.083831 | orchestrator | 2026-04-11 00:53:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:54:00.115618 | orchestrator | 2026-04-11 00:54:00 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:54:00.116682 | orchestrator | 2026-04-11 00:54:00 | INFO  | Task 07df6e07-cd49-4092-9d07-850fde14147f is in state STARTED 2026-04-11 00:54:00.116752 | orchestrator | 2026-04-11 00:54:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:54:03.151277 | orchestrator | 2026-04-11 00:54:03 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:54:03.151837 | orchestrator | 2026-04-11 00:54:03 | INFO  | Task 07df6e07-cd49-4092-9d07-850fde14147f is in state STARTED 2026-04-11 00:54:03.151920 | orchestrator | 2026-04-11 00:54:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:54:06.190477 | orchestrator | 2026-04-11 00:54:06 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:54:06.193483 | orchestrator | 2026-04-11 00:54:06 | INFO  | Task 07df6e07-cd49-4092-9d07-850fde14147f is in state STARTED 2026-04-11 00:54:06.193545 | orchestrator | 2026-04-11 00:54:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:54:09.223630 | orchestrator | 2026-04-11 00:54:09 | INFO  | Task e7168c9d-85ff-4959-85f0-9a4ed9e6d8b3 is in state STARTED 2026-04-11 00:54:09.224003 | orchestrator | 2026-04-11 00:54:09 | INFO  | Task d50b2317-c794-488a-97a9-c9dfa235f877 is in state STARTED 2026-04-11 00:54:09.226679 | orchestrator | 2026-04-11 00:54:09 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:54:09.227234 | orchestrator | 2026-04-11 00:54:09 | INFO  | Task 56bd46fd-9057-44ae-b2d7-0d7596f4f4e8 is in state STARTED 2026-04-11 00:54:09.228041 | orchestrator | 2026-04-11 
00:54:09 | INFO  | Task 1929e85a-0e61-47a6-b76e-cc5e45b5e13f is in state STARTED 2026-04-11 00:54:09.229824 | orchestrator | 2026-04-11 00:54:09 | INFO  | Task 07df6e07-cd49-4092-9d07-850fde14147f is in state SUCCESS 2026-04-11 00:54:09.232347 | orchestrator | 2026-04-11 00:54:09.232389 | orchestrator | 2026-04-11 00:54:09.232395 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 00:54:09.232401 | orchestrator | 2026-04-11 00:54:09.232406 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 00:54:09.232412 | orchestrator | Saturday 11 April 2026 00:53:25 +0000 (0:00:00.275) 0:00:00.275 ******** 2026-04-11 00:54:09.232417 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:54:09.232423 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:54:09.232428 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:54:09.232433 | orchestrator | 2026-04-11 00:54:09.232438 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 00:54:09.232444 | orchestrator | Saturday 11 April 2026 00:53:25 +0000 (0:00:00.247) 0:00:00.523 ******** 2026-04-11 00:54:09.232449 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-11 00:54:09.232455 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-11 00:54:09.232460 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-11 00:54:09.232465 | orchestrator | 2026-04-11 00:54:09.232481 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-11 00:54:09.232497 | orchestrator | 2026-04-11 00:54:09.232504 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-11 00:54:09.232512 | orchestrator | Saturday 11 April 2026 00:53:26 +0000 (0:00:00.248) 0:00:00.771 ******** 2026-04-11 00:54:09.232520 | orchestrator | included: 
/ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:54:09.232554 | orchestrator | 2026-04-11 00:54:09.232561 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-11 00:54:09.232568 | orchestrator | Saturday 11 April 2026 00:53:26 +0000 (0:00:00.533) 0:00:01.305 ******** 2026-04-11 00:54:09.232591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.232599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.232615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.232621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 00:54:09.232634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 00:54:09.232640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 00:54:09.232645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.232650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.232658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.232663 | orchestrator | 2026-04-11 00:54:09.232667 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-11 
00:54:09.232680 | orchestrator | Saturday 11 April 2026 00:53:28 +0000 (0:00:02.083) 0:00:03.388 ******** 2026-04-11 00:54:09.232685 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:54:09.232691 | orchestrator | 2026-04-11 00:54:09.232698 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-11 00:54:09.232704 | orchestrator | Saturday 11 April 2026 00:53:28 +0000 (0:00:00.095) 0:00:03.484 ******** 2026-04-11 00:54:09.232716 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:54:09.232723 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:54:09.232730 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:54:09.232737 | orchestrator | 2026-04-11 00:54:09.232744 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-11 00:54:09.232751 | orchestrator | Saturday 11 April 2026 00:53:29 +0000 (0:00:00.239) 0:00:03.723 ******** 2026-04-11 00:54:09.232757 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 00:54:09.232766 | orchestrator | 2026-04-11 00:54:09.232771 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-11 00:54:09.232775 | orchestrator | Saturday 11 April 2026 00:53:29 +0000 (0:00:00.755) 0:00:04.479 ******** 2026-04-11 00:54:09.232780 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:54:09.232785 | orchestrator | 2026-04-11 00:54:09.232789 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-11 00:54:09.232793 | orchestrator | Saturday 11 April 2026 00:53:30 +0000 (0:00:00.580) 0:00:05.060 ******** 2026-04-11 00:54:09.232801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.232806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.232816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.232825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 00:54:09.232832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 00:54:09.232856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 00:54:09.232864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.232871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.232882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.233165 | orchestrator | 2026-04-11 00:54:09.233175 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-11 00:54:09.233180 | orchestrator | Saturday 11 April 2026 00:53:33 +0000 (0:00:03.157) 0:00:08.217 ******** 2026-04-11 00:54:09.233191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:54:09.233198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.233203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}})  2026-04-11 00:54:09.233209 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:54:09.233214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:54:09.233230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.233235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:54:09.233239 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:54:09.233248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:54:09.233253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.233257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:54:09.233262 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:54:09.233271 | orchestrator | 2026-04-11 00:54:09.233276 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-11 00:54:09.233280 | orchestrator | Saturday 11 April 2026 00:53:34 +0000 (0:00:00.585) 0:00:08.803 ******** 2026-04-11 00:54:09.233289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:54:09.233294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.233301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:54:09.233306 | 
orchestrator | skipping: [testbed-node-0] 2026-04-11 00:54:09.233311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:54:09.233316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.233331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:54:09.233336 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:54:09.233340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:54:09.233348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.233353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:54:09.233357 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:54:09.233362 | orchestrator | 2026-04-11 00:54:09.233366 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-11 00:54:09.233371 | orchestrator | Saturday 11 April 2026 00:53:35 +0000 (0:00:00.855) 0:00:09.658 ******** 2026-04-11 00:54:09.233376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.233388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.233396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.233401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 00:54:09.233406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 00:54:09.233414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 00:54:09.233423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.233428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.233435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.233439 | orchestrator | 2026-04-11 00:54:09.233444 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-11 00:54:09.233448 | orchestrator | Saturday 11 April 2026 00:53:38 +0000 (0:00:03.072) 0:00:12.731 ******** 2026-04-11 00:54:09.233453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.233462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.233470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}}}}) 2026-04-11 00:54:09.233475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.233483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.233488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.233500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.233510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.233517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.233524 | orchestrator | 2026-04-11 00:54:09.233530 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-11 00:54:09.233537 | orchestrator | Saturday 11 April 2026 00:53:42 +0000 (0:00:04.620) 0:00:17.351 ******** 2026-04-11 00:54:09.233544 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:54:09.233551 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:54:09.233558 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:54:09.233565 | orchestrator | 2026-04-11 00:54:09.233572 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-11 00:54:09.233578 | orchestrator | Saturday 11 April 2026 00:53:43 +0000 (0:00:01.224) 0:00:18.577 ******** 2026-04-11 00:54:09.233585 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:54:09.233591 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:54:09.233598 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:54:09.233605 | orchestrator | 2026-04-11 00:54:09.233612 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-11 00:54:09.233625 | orchestrator | Saturday 11 April 2026 00:53:44 +0000 (0:00:00.708) 0:00:19.286 ******** 2026-04-11 00:54:09.233630 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:54:09.233634 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:54:09.233638 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:54:09.233643 
| orchestrator | 2026-04-11 00:54:09.233647 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-11 00:54:09.233651 | orchestrator | Saturday 11 April 2026 00:53:45 +0000 (0:00:00.409) 0:00:19.695 ******** 2026-04-11 00:54:09.233661 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:54:09.233665 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:54:09.233669 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:54:09.233674 | orchestrator | 2026-04-11 00:54:09.233678 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-11 00:54:09.233683 | orchestrator | Saturday 11 April 2026 00:53:45 +0000 (0:00:00.246) 0:00:19.942 ******** 2026-04-11 00:54:09.233690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:54:09.233698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.233710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:54:09.233718 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:54:09.233726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:54:09.233738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.233746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:54:09.233753 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:54:09.233758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:54:09.233818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.233831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:54:09.233865 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:54:09.233874 | orchestrator | 2026-04-11 00:54:09.233881 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-11 00:54:09.233888 | orchestrator | Saturday 11 April 2026 00:53:45 +0000 (0:00:00.576) 0:00:20.519 ******** 2026-04-11 00:54:09.233900 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:54:09.233907 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:54:09.233914 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:54:09.233920 | orchestrator | 2026-04-11 00:54:09.233926 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-11 00:54:09.233937 | orchestrator | Saturday 11 April 2026 00:53:46 +0000 (0:00:00.283) 0:00:20.802 ******** 2026-04-11 00:54:09.233943 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-11 00:54:09.233951 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-11 00:54:09.233958 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-11 00:54:09.233965 | orchestrator | 2026-04-11 00:54:09.233971 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-11 00:54:09.233979 | orchestrator | Saturday 11 April 2026 00:53:47 +0000 (0:00:01.764) 0:00:22.567 ******** 2026-04-11 00:54:09.233986 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 00:54:09.233993 | orchestrator | 
2026-04-11 00:54:09.234000 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-04-11 00:54:09.234007 | orchestrator | Saturday 11 April 2026 00:53:48 +0000 (0:00:00.966) 0:00:23.533 ********
2026-04-11 00:54:09.234086 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:54:09.234094 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:54:09.234098 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:54:09.234103 | orchestrator |
2026-04-11 00:54:09.234107 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-04-11 00:54:09.234111 | orchestrator | Saturday 11 April 2026 00:53:49 +0000 (0:00:00.533) 0:00:24.066 ********
2026-04-11 00:54:09.234116 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-11 00:54:09.234120 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 00:54:09.234124 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-11 00:54:09.234129 | orchestrator |
2026-04-11 00:54:09.234133 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-04-11 00:54:09.234138 | orchestrator | Saturday 11 April 2026 00:53:50 +0000 (0:00:00.973) 0:00:25.040 ********
2026-04-11 00:54:09.234142 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:54:09.234147 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:54:09.234152 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:54:09.234156 | orchestrator |
2026-04-11 00:54:09.234161 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-04-11 00:54:09.234165 | orchestrator | Saturday 11 April 2026 00:53:50 +0000 (0:00:00.245) 0:00:25.285 ********
2026-04-11 00:54:09.234170 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-11 00:54:09.234174 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-11 00:54:09.234178 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-11 00:54:09.234183 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-11 00:54:09.234187 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-11 00:54:09.234191 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-11 00:54:09.234196 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-11 00:54:09.234200 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-11 00:54:09.234205 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-11 00:54:09.234209 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-11 00:54:09.234218 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-11 00:54:09.234230 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-11 00:54:09.234235 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-11 00:54:09.234239 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-11 00:54:09.234244 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-11 00:54:09.234248 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 00:54:09.234253 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 00:54:09.234257 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 00:54:09.234261 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 00:54:09.234266 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 00:54:09.234270 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 00:54:09.234274 | orchestrator |
2026-04-11 00:54:09.234279 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-04-11 00:54:09.234283 | orchestrator | Saturday 11 April 2026 00:53:59 +0000 (0:00:08.343) 0:00:33.628 ********
2026-04-11 00:54:09.234287 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 00:54:09.234292 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 00:54:09.234299 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 00:54:09.234304 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 00:54:09.234308 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 00:54:09.234313 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 00:54:09.234317 | orchestrator |
2026-04-11 00:54:09.234321 | orchestrator | TASK [service-check-containers : keystone | Check containers] ******************
2026-04-11 00:54:09.234326 | orchestrator | Saturday 11 April 2026 00:54:01 +0000 (0:00:02.510) 0:00:36.139 ********
2026-04-11 00:54:09.234331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value':
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.234336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.234351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 00:54:09.234363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 00:54:09.234372 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 00:54:09.234384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 00:54:09.234395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.234412 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.234421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 00:54:09.234430 | orchestrator | 2026-04-11 00:54:09.234437 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-11 00:54:09.234454 | orchestrator | Saturday 11 April 2026 00:54:03 +0000 (0:00:02.322) 0:00:38.462 ******** 2026-04-11 00:54:09.234461 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 00:54:09.234468 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:54:09.234475 | orchestrator | } 2026-04-11 00:54:09.234482 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 00:54:09.234487 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:54:09.234491 | orchestrator | } 2026-04-11 00:54:09.234496 | 
orchestrator | changed: [testbed-node-2] => { 2026-04-11 00:54:09.234500 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 00:54:09.234504 | orchestrator | } 2026-04-11 00:54:09.234509 | orchestrator | 2026-04-11 00:54:09.234513 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 00:54:09.234517 | orchestrator | Saturday 11 April 2026 00:54:04 +0000 (0:00:00.308) 0:00:38.770 ******** 2026-04-11 00:54:09.234525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:54:09.234531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.234540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:54:09.234544 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:54:09.234555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:54:09.234563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.234568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 00:54:09.234573 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:54:09.234577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 00:54:09.234586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 00:54:09.234595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}})
2026-04-11 00:54:09.234600 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:54:09.234604 | orchestrator |
2026-04-11 00:54:09.234609 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-11 00:54:09.234613 | orchestrator | Saturday 11 April 2026 00:54:04 +0000 (0:00:00.244) 0:00:39.533 ********
2026-04-11 00:54:09.234617 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:54:09.234622 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:54:09.234626 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:54:09.234630 | orchestrator |
2026-04-11 00:54:09.234635 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-04-11 00:54:09.234639 | orchestrator | Saturday 11 April 2026 00:54:05 +0000 (0:00:00.741) 0:00:39.777 ********
2026-04-11 00:54:09.234644 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-11 00:54:09.234648 | orchestrator |
2026-04-11 00:54:09.234652 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:54:09.234658 | orchestrator | testbed-node-0 : ok=18  changed=10  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0
2026-04-11 00:54:09.234664 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0  skipped=11  rescued=0 ignored=0
2026-04-11 00:54:09.234672 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0  skipped=11  rescued=0 ignored=0
2026-04-11 00:54:09.234677 | orchestrator |
2026-04-11 00:54:09.234681 | orchestrator |
2026-04-11 00:54:09.234687 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:54:09.234694 | orchestrator | Saturday 11 April 2026 00:54:05 +0000 (0:00:00.741) 0:00:40.519 ********
2026-04-11 00:54:09.234711 | orchestrator |
===============================================================================
2026-04-11 00:54:09.234718 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.34s
2026-04-11 00:54:09.234725 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.62s
2026-04-11 00:54:09.234733 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.16s
2026-04-11 00:54:09.234740 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.07s
2026-04-11 00:54:09.234747 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.51s
2026-04-11 00:54:09.234755 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.32s
2026-04-11 00:54:09.234762 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.08s
2026-04-11 00:54:09.234766 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.76s
2026-04-11 00:54:09.234771 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.23s
2026-04-11 00:54:09.234775 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 0.97s
2026-04-11 00:54:09.234779 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 0.97s
2026-04-11 00:54:09.234784 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 0.86s
2026-04-11 00:54:09.234788 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.76s
2026-04-11 00:54:09.234792 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.76s
2026-04-11 00:54:09.234797 | orchestrator | keystone : Creating keystone database ----------------------------------- 0.74s
2026-04-11 00:54:09.234801 | orchestrator | keystone : Create Keystone domain-specific config directory ------------- 0.71s
2026-04-11 00:54:09.234806 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS certificate --- 0.59s
2026-04-11 00:54:09.234810 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.58s
2026-04-11 00:54:09.234814 | orchestrator | keystone : Copying over existing policy file ---------------------------- 0.58s
2026-04-11 00:54:09.234819 | orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 0.53s
2026-04-11 00:54:09.234823 | orchestrator | 2026-04-11 00:54:09 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:54:12.252523 | orchestrator | 2026-04-11 00:54:12 | INFO  | Task e7168c9d-85ff-4959-85f0-9a4ed9e6d8b3 is in state STARTED
2026-04-11 00:54:12.255511 | orchestrator | 2026-04-11 00:54:12 | INFO  | Task d50b2317-c794-488a-97a9-c9dfa235f877 is in state STARTED
2026-04-11 00:54:12.257779 | orchestrator | 2026-04-11 00:54:12 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:54:12.260947 | orchestrator | 2026-04-11 00:54:12 | INFO  | Task 56bd46fd-9057-44ae-b2d7-0d7596f4f4e8 is in state STARTED
2026-04-11 00:54:12.264794 | orchestrator | 2026-04-11 00:54:12 | INFO  | Task 1929e85a-0e61-47a6-b76e-cc5e45b5e13f is in state STARTED
2026-04-11 00:54:12.264912 | orchestrator | 2026-04-11 00:54:12 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:54:15.311777 | orchestrator | 2026-04-11 00:54:15 | INFO  | Task e7168c9d-85ff-4959-85f0-9a4ed9e6d8b3 is in state STARTED
2026-04-11 00:54:15.311942 | orchestrator | 2026-04-11 00:54:15 | INFO  | Task d50b2317-c794-488a-97a9-c9dfa235f877 is in state STARTED
2026-04-11 00:54:15.312613 | orchestrator | 2026-04-11 00:54:15 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:54:15.315733 | orchestrator | 2026-04-11 00:54:15 | INFO  | Task 56bd46fd-9057-44ae-b2d7-0d7596f4f4e8
is in state STARTED
2026-04-11 00:54:15.318756 | orchestrator | 2026-04-11 00:54:15 | INFO  | Task 1929e85a-0e61-47a6-b76e-cc5e45b5e13f is in state STARTED
2026-04-11 00:54:15.318806 | orchestrator | 2026-04-11 00:54:15 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:54:55.017516 | orchestrator | 2026-04-11 00:54:55 | INFO  | Task e7168c9d-85ff-4959-85f0-9a4ed9e6d8b3 is in state STARTED
2026-04-11 00:54:55.018800 | orchestrator | 2026-04-11 00:54:55 | INFO  |
Task d50b2317-c794-488a-97a9-c9dfa235f877 is in state STARTED 2026-04-11 00:54:55.021082 | orchestrator | 2026-04-11 00:54:55 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:54:55.023789 | orchestrator | 2026-04-11 00:54:55 | INFO  | Task 56bd46fd-9057-44ae-b2d7-0d7596f4f4e8 is in state STARTED 2026-04-11 00:54:55.026086 | orchestrator | 2026-04-11 00:54:55 | INFO  | Task 1929e85a-0e61-47a6-b76e-cc5e45b5e13f is in state STARTED 2026-04-11 00:54:55.026137 | orchestrator | 2026-04-11 00:54:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:54:58.072479 | orchestrator | 2026-04-11 00:54:58 | INFO  | Task e7168c9d-85ff-4959-85f0-9a4ed9e6d8b3 is in state STARTED 2026-04-11 00:54:58.074844 | orchestrator | 2026-04-11 00:54:58 | INFO  | Task d50b2317-c794-488a-97a9-c9dfa235f877 is in state STARTED 2026-04-11 00:54:58.075764 | orchestrator | 2026-04-11 00:54:58 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:54:58.077320 | orchestrator | 2026-04-11 00:54:58 | INFO  | Task 56bd46fd-9057-44ae-b2d7-0d7596f4f4e8 is in state STARTED 2026-04-11 00:54:58.078564 | orchestrator | 2026-04-11 00:54:58 | INFO  | Task 1929e85a-0e61-47a6-b76e-cc5e45b5e13f is in state STARTED 2026-04-11 00:54:58.078613 | orchestrator | 2026-04-11 00:54:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:55:01.127942 | orchestrator | 2026-04-11 00:55:01 | INFO  | Task e7168c9d-85ff-4959-85f0-9a4ed9e6d8b3 is in state STARTED 2026-04-11 00:55:01.129248 | orchestrator | 2026-04-11 00:55:01 | INFO  | Task d50b2317-c794-488a-97a9-c9dfa235f877 is in state STARTED 2026-04-11 00:55:01.131423 | orchestrator | 2026-04-11 00:55:01 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:55:01.133764 | orchestrator | 2026-04-11 00:55:01 | INFO  | Task 56bd46fd-9057-44ae-b2d7-0d7596f4f4e8 is in state STARTED 2026-04-11 00:55:01.135356 | orchestrator | 2026-04-11 00:55:01 | INFO  | Task 
1929e85a-0e61-47a6-b76e-cc5e45b5e13f is in state STARTED 2026-04-11 00:55:01.135402 | orchestrator | 2026-04-11 00:55:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:55:04.182578 | orchestrator | 2026-04-11 00:55:04 | INFO  | Task e7168c9d-85ff-4959-85f0-9a4ed9e6d8b3 is in state STARTED 2026-04-11 00:55:04.185598 | orchestrator | 2026-04-11 00:55:04 | INFO  | Task d50b2317-c794-488a-97a9-c9dfa235f877 is in state STARTED 2026-04-11 00:55:04.187264 | orchestrator | 2026-04-11 00:55:04 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:55:04.188499 | orchestrator | 2026-04-11 00:55:04 | INFO  | Task 56bd46fd-9057-44ae-b2d7-0d7596f4f4e8 is in state STARTED 2026-04-11 00:55:04.189737 | orchestrator | 2026-04-11 00:55:04 | INFO  | Task 1929e85a-0e61-47a6-b76e-cc5e45b5e13f is in state STARTED 2026-04-11 00:55:04.189874 | orchestrator | 2026-04-11 00:55:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:55:07.238898 | orchestrator | 2026-04-11 00:55:07 | INFO  | Task e7168c9d-85ff-4959-85f0-9a4ed9e6d8b3 is in state STARTED 2026-04-11 00:55:07.240944 | orchestrator | 2026-04-11 00:55:07 | INFO  | Task d50b2317-c794-488a-97a9-c9dfa235f877 is in state SUCCESS 2026-04-11 00:55:07.243504 | orchestrator | 2026-04-11 00:55:07 | INFO  | Task cb8fdbbd-7c1c-4dbb-8d03-f6b9cda0cb4c is in state STARTED 2026-04-11 00:55:07.245663 | orchestrator | 2026-04-11 00:55:07 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:55:07.247832 | orchestrator | 2026-04-11 00:55:07 | INFO  | Task a6b6ea8d-c62b-42cd-b7d0-3084defabe1a is in state STARTED 2026-04-11 00:55:07.249214 | orchestrator | 2026-04-11 00:55:07 | INFO  | Task 56bd46fd-9057-44ae-b2d7-0d7596f4f4e8 is in state SUCCESS 2026-04-11 00:55:07.250841 | orchestrator | 2026-04-11 00:55:07 | INFO  | Task 1929e85a-0e61-47a6-b76e-cc5e45b5e13f is in state STARTED 2026-04-11 00:55:07.250997 | orchestrator | 2026-04-11 00:55:07 | INFO  | Wait 1 
second(s) until the next check
2026-04-11 00:55:10.297580 | orchestrator |
2026-04-11 00:55:10.297677 | orchestrator |
2026-04-11 00:55:10.297732 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 00:55:10.297741 | orchestrator |
2026-04-11 00:55:10.297745 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 00:55:10.297750 | orchestrator | Saturday 11 April 2026 00:54:10 +0000 (0:00:00.352) 0:00:00.352 ********
2026-04-11 00:55:10.297754 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:55:10.297761 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:55:10.297767 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:55:10.297774 | orchestrator |
2026-04-11 00:55:10.297780 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 00:55:10.297786 | orchestrator | Saturday 11 April 2026 00:54:10 +0000 (0:00:00.358) 0:00:00.710 ********
2026-04-11 00:55:10.297794 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-11 00:55:10.297800 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-11 00:55:10.297807 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-11 00:55:10.297814 | orchestrator |
2026-04-11 00:55:10.297820 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-11 00:55:10.297845 | orchestrator |
2026-04-11 00:55:10.297863 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-11 00:55:10.297871 | orchestrator | Saturday 11 April 2026 00:54:10 +0000 (0:00:00.250) 0:00:00.961 ********
2026-04-11 00:55:10.297877 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:55:10.297882 | orchestrator |
2026-04-11 00:55:10.297886 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************
2026-04-11 00:55:10.297890 | orchestrator | Saturday 11 April 2026 00:54:11 +0000 (0:00:00.847) 0:00:01.809 ********
2026-04-11 00:55:10.297894 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (5 retries left).
2026-04-11 00:55:10.297898 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (4 retries left).
2026-04-11 00:55:10.297902 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (3 retries left).
2026-04-11 00:55:10.297905 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (2 retries left).
2026-04-11 00:55:10.297909 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (1 retries left).
2026-04-11 00:55:10.297914 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-11 00:55:10.297921 | orchestrator |
2026-04-11 00:55:10.297924 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:55:10.297928 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-11 00:55:10.297934 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:55:10.297939 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:55:10.297943 | orchestrator |
2026-04-11 00:55:10.297947 | orchestrator |
2026-04-11 00:55:10.297951 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:55:10.297954 | orchestrator | Saturday 11 April 2026 00:55:05 +0000 (0:00:53.673) 0:00:55.482 ********
2026-04-11 00:55:10.297958 | orchestrator | ===============================================================================
2026-04-11 00:55:10.297962 | orchestrator | service-ks-register : designate | Creating/deleting services ----------- 53.67s
2026-04-11 00:55:10.297966 | orchestrator | designate : include_tasks ----------------------------------------------- 0.85s
2026-04-11 00:55:10.297969 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2026-04-11 00:55:10.297973 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.25s
2026-04-11 00:55:10.297977 | orchestrator |
2026-04-11 00:55:10.297980 | orchestrator |
2026-04-11 00:55:10.297984 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 00:55:10.297988 | orchestrator |
2026-04-11 00:55:10.297992 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 00:55:10.297995 | orchestrator | Saturday 11 April 2026 00:54:09 +0000 (0:00:00.276) 0:00:00.276 ********
2026-04-11 00:55:10.297999 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:55:10.298003 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:55:10.298006 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:55:10.298010 | orchestrator |
2026-04-11 00:55:10.298049 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 00:55:10.298053 | orchestrator | Saturday 11 April 2026 00:54:10 +0000 (0:00:00.277) 0:00:00.553 ********
2026-04-11 00:55:10.298061 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-04-11 00:55:10.298066 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-04-11 00:55:10.298070 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-04-11 00:55:10.298073 | orchestrator |
2026-04-11 00:55:10.298077 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-04-11 00:55:10.298081 | orchestrator |
2026-04-11 00:55:10.298098 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-11 00:55:10.298102 | orchestrator | Saturday 11 April 2026 00:54:10 +0000 (0:00:00.312) 0:00:00.866 ********
2026-04-11 00:55:10.298106 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:55:10.298110 | orchestrator |
2026-04-11 00:55:10.298114 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] *************
2026-04-11 00:55:10.298117 | orchestrator | Saturday 11 April 2026 00:54:11 +0000 (0:00:00.684) 0:00:01.551 ********
2026-04-11 00:55:10.298121 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (5 retries left).
2026-04-11 00:55:10.298126 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (4 retries left).
2026-04-11 00:55:10.298133 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (3 retries left).
2026-04-11 00:55:10.298139 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (2 retries left).
2026-04-11 00:55:10.298151 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (1 retries left).
2026-04-11 00:55:10.298156 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-11 00:55:10.298165 | orchestrator |
2026-04-11 00:55:10.298172 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:55:10.298178 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-11 00:55:10.298185 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:55:10.298191 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:55:10.298199 | orchestrator |
2026-04-11 00:55:10.298204 | orchestrator |
2026-04-11 00:55:10.298210 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:55:10.298217 | orchestrator | Saturday 11 April 2026 00:55:05 +0000 (0:00:53.850) 0:00:55.401 ********
2026-04-11 00:55:10.298224 | orchestrator | ===============================================================================
2026-04-11 00:55:10.298231 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------ 53.85s
2026-04-11 00:55:10.298238 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.68s
2026-04-11 00:55:10.298242 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.31s
2026-04-11 00:55:10.298246 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2026-04-11 00:55:10.298249 | orchestrator |
2026-04-11 00:55:10.298253 | orchestrator |
2026-04-11 00:55:10.298257 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 00:55:10.298260 | orchestrator |
2026-04-11 00:55:10.298264 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 00:55:10.298272 | orchestrator | Saturday 11 April 2026 00:54:09 +0000 (0:00:00.315) 0:00:00.315 ********
2026-04-11 00:55:10.298276 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:55:10.298280 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:55:10.298284 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:55:10.298287 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:55:10.298291 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:55:10.298295 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:55:10.298299 | orchestrator |
2026-04-11 00:55:10.298302 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 00:55:10.298306 | orchestrator | Saturday 11 April 2026 00:54:10 +0000 (0:00:00.857) 0:00:01.173 ********
2026-04-11 00:55:10.298310 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-11 00:55:10.298314 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-11 00:55:10.298317 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-11 00:55:10.298321 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-11 00:55:10.298325 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-11 00:55:10.298329 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-11 00:55:10.298332 | orchestrator |
2026-04-11 00:55:10.298336 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-11 00:55:10.298340 | orchestrator |
2026-04-11 00:55:10.298343 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-11 00:55:10.298347 | orchestrator | Saturday 11 April 2026 00:54:11 +0000 (0:00:00.785) 0:00:01.958 ********
2026-04-11 00:55:10.298351 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:55:10.298355 | orchestrator |
2026-04-11 00:55:10.298359 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-11 00:55:10.298362 | orchestrator | Saturday 11 April 2026 00:54:12 +0000 (0:00:01.035) 0:00:02.993 ********
2026-04-11 00:55:10.298366 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:55:10.298370 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:55:10.298374 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:55:10.298378 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:55:10.298385 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:55:10.298389 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:55:10.298392 | orchestrator |
2026-04-11 00:55:10.298396 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-11 00:55:10.298400 | orchestrator | Saturday 11 April 2026 00:54:13 +0000 (0:00:01.358) 0:00:04.352 ********
2026-04-11 00:55:10.298404 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:55:10.298407 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:55:10.298411 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:55:10.298415 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:55:10.298418 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:55:10.298422 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:55:10.298426 | orchestrator |
2026-04-11 00:55:10.298429 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-11 00:55:10.298433 | orchestrator | Saturday 11 April 2026 00:54:14 +0000 (0:00:01.040) 0:00:05.393 ********
2026-04-11 00:55:10.298437 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:55:10.298441 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:55:10.298444 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:55:10.298448 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:55:10.298452 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:55:10.298456 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:55:10.298459 | orchestrator |
2026-04-11 00:55:10.298466 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-11 00:55:10.298470 | orchestrator | Saturday 11 April 2026 00:54:15 +0000 (0:00:00.515) 0:00:05.908 ********
2026-04-11 00:55:10.298474 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:55:10.298477 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:55:10.298485 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:55:10.298488 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:55:10.298492 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:55:10.298496 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:55:10.298500 | orchestrator |
2026-04-11 00:55:10.298505 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] **************
2026-04-11 00:55:10.298511 | orchestrator | Saturday 11 April 2026 00:54:15 +0000 (0:00:00.603) 0:00:06.511 ********
2026-04-11 00:55:10.298516 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (5 retries left).
2026-04-11 00:55:10.298523 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (4 retries left).
2026-04-11 00:55:10.298529 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (3 retries left).
2026-04-11 00:55:10.298535 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (2 retries left).
2026-04-11 00:55:10.298541 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (1 retries left).
2026-04-11 00:55:10.298547 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-11 00:55:10.298556 | orchestrator |
2026-04-11 00:55:10.298563 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:55:10.298567 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=2  rescued=0 ignored=0
2026-04-11 00:55:10.298571 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:55:10.298574 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:55:10.298578 | orchestrator | testbed-node-3 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:55:10.298582 | orchestrator | testbed-node-4 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:55:10.298586 | orchestrator | testbed-node-5 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 00:55:10.298590 | orchestrator |
2026-04-11 00:55:10.298593 | orchestrator |
2026-04-11 00:55:10.298597 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:55:10.298601 | orchestrator | Saturday 11 April 2026 00:55:09 +0000 (0:00:53.329) 0:00:59.840 ********
2026-04-11 00:55:10.298604 | orchestrator | ===============================================================================
2026-04-11 00:55:10.298608 | orchestrator | service-ks-register : neutron | Creating/deleting services ------------- 53.33s
2026-04-11 00:55:10.298612 | orchestrator | neutron : Get container facts ------------------------------------------- 1.36s
2026-04-11 00:55:10.298616 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.04s
2026-04-11 00:55:10.298619 | orchestrator | neutron : include_tasks ------------------------------------------------- 1.04s
2026-04-11 00:55:10.298623 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s
2026-04-11 00:55:10.298627 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2026-04-11 00:55:10.298633 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.60s
2026-04-11 00:55:10.298637 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.52s
2026-04-11 00:55:10.298646 | orchestrator | 2026-04-11 00:55:10 | INFO  | Task e7168c9d-85ff-4959-85f0-9a4ed9e6d8b3 is in state STARTED
2026-04-11 00:55:10.298651 | orchestrator | 2026-04-11 00:55:10 | INFO  | Task cb8fdbbd-7c1c-4dbb-8d03-f6b9cda0cb4c is in state STARTED
2026-04-11 00:55:10.298655 | orchestrator | 2026-04-11 00:55:10 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:55:10.298658 | orchestrator | 2026-04-11 00:55:10 | INFO  | Task a6b6ea8d-c62b-42cd-b7d0-3084defabe1a is in state STARTED
2026-04-11 00:55:10.298662 | orchestrator | 2026-04-11 00:55:10 | INFO  | Task 1929e85a-0e61-47a6-b76e-cc5e45b5e13f is in state SUCCESS
2026-04-11 00:55:10.298666 | orchestrator | 2026-04-11 00:55:10 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:55:13.339101 | orchestrator | 2026-04-11
00:55:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:55:13.339670 | orchestrator | 2026-04-11 00:55:13 | INFO  | Task e7168c9d-85ff-4959-85f0-9a4ed9e6d8b3 is in state STARTED
2026-04-11 00:55:13.340620 | orchestrator | 2026-04-11 00:55:13 | INFO  | Task cb8fdbbd-7c1c-4dbb-8d03-f6b9cda0cb4c is in state STARTED
2026-04-11 00:55:13.341501 | orchestrator | 2026-04-11 00:55:13 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:55:13.342276 | orchestrator | 2026-04-11 00:55:13 | INFO  | Task a6b6ea8d-c62b-42cd-b7d0-3084defabe1a is in state STARTED
2026-04-11 00:55:13.342400 | orchestrator | 2026-04-11 00:55:13 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:55:43.854323 | orchestrator | 2026-04-11 00:55:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:55:43.855628 | orchestrator | 2026-04-11 00:55:43 | INFO  | Task e7168c9d-85ff-4959-85f0-9a4ed9e6d8b3 is in state SUCCESS
2026-04-11 00:55:43.857684 | orchestrator | 2026-04-11 00:55:43 | INFO  | Task cb8fdbbd-7c1c-4dbb-8d03-f6b9cda0cb4c is in state STARTED
2026-04-11 00:55:43.858949 | orchestrator | 2026-04-11 00:55:43 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:55:43.860514 | orchestrator | 2026-04-11 00:55:43 | INFO  | Task a6b6ea8d-c62b-42cd-b7d0-3084defabe1a is in state STARTED
2026-04-11 00:55:43.860586 | orchestrator | 2026-04-11 00:55:43 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:55:46.910387 | orchestrator | 2026-04-11 00:55:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:55:46.911247 | orchestrator | 2026-04-11 00:55:46 | INFO  | Task cb8fdbbd-7c1c-4dbb-8d03-f6b9cda0cb4c is in state STARTED
2026-04-11 00:55:46.912393 | orchestrator | 2026-04-11 00:55:46 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:55:46.915281 | orchestrator | 2026-04-11 00:55:46 | INFO  | Task a6b6ea8d-c62b-42cd-b7d0-3084defabe1a is in state STARTED
2026-04-11 00:55:46.915338 | orchestrator | 2026-04-11 00:55:46 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:55:49.961570 | orchestrator | 2026-04-11 00:55:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:55:49.963678 | orchestrator | 2026-04-11 00:55:49 | INFO  | Task cb8fdbbd-7c1c-4dbb-8d03-f6b9cda0cb4c is in state STARTED
2026-04-11 00:55:49.965617 | orchestrator | 2026-04-11 00:55:49 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:55:49.967990 | orchestrator | 2026-04-11 00:55:49 | INFO  | Task a6b6ea8d-c62b-42cd-b7d0-3084defabe1a is in state STARTED
2026-04-11 00:55:49.968024 | orchestrator | 2026-04-11 00:55:49 | INFO  | Wait 1
second(s) until the next check 2026-04-11 00:55:53.015771 | orchestrator | 2026-04-11 00:55:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 00:55:53.017881 | orchestrator | 2026-04-11 00:55:53 | INFO  | Task cb8fdbbd-7c1c-4dbb-8d03-f6b9cda0cb4c is in state STARTED 2026-04-11 00:55:53.019781 | orchestrator | 2026-04-11 00:55:53 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:55:53.021142 | orchestrator | 2026-04-11 00:55:53 | INFO  | Task a6b6ea8d-c62b-42cd-b7d0-3084defabe1a is in state STARTED 2026-04-11 00:55:53.021220 | orchestrator | 2026-04-11 00:55:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:55:56.068718 | orchestrator | 2026-04-11 00:55:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 00:55:56.070535 | orchestrator | 2026-04-11 00:55:56 | INFO  | Task cb8fdbbd-7c1c-4dbb-8d03-f6b9cda0cb4c is in state STARTED 2026-04-11 00:55:56.072454 | orchestrator | 2026-04-11 00:55:56 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:55:56.074194 | orchestrator | 2026-04-11 00:55:56 | INFO  | Task a6b6ea8d-c62b-42cd-b7d0-3084defabe1a is in state STARTED 2026-04-11 00:55:56.074257 | orchestrator | 2026-04-11 00:55:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 00:55:59.127025 | orchestrator | 2026-04-11 00:55:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 00:55:59.132401 | orchestrator | 2026-04-11 00:55:59 | INFO  | Task cb8fdbbd-7c1c-4dbb-8d03-f6b9cda0cb4c is in state STARTED 2026-04-11 00:55:59.135326 | orchestrator | 2026-04-11 00:55:59 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED 2026-04-11 00:55:59.137495 | orchestrator | 2026-04-11 00:55:59 | INFO  | Task a6b6ea8d-c62b-42cd-b7d0-3084defabe1a is in state STARTED 2026-04-11 00:55:59.137546 | orchestrator | 2026-04-11 00:55:59 | INFO  | Wait 1 second(s) until the next check 
2026-04-11 00:56:02.176207 | orchestrator | 2026-04-11 00:56:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:56:02.177941 | orchestrator | 2026-04-11 00:56:02 | INFO  | Task cb8fdbbd-7c1c-4dbb-8d03-f6b9cda0cb4c is in state STARTED
2026-04-11 00:56:02.179258 | orchestrator | 2026-04-11 00:56:02 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:56:02.180509 | orchestrator | 2026-04-11 00:56:02 | INFO  | Task a6b6ea8d-c62b-42cd-b7d0-3084defabe1a is in state STARTED
2026-04-11 00:56:02.180550 | orchestrator | 2026-04-11 00:56:02 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:56:05.233879 | orchestrator | 2026-04-11 00:56:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:56:05.233937 | orchestrator | 2026-04-11 00:56:05 | INFO  | Task cb8fdbbd-7c1c-4dbb-8d03-f6b9cda0cb4c is in state SUCCESS
2026-04-11 00:56:05.235151 | orchestrator | 2026-04-11 00:56:05 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:56:05.236379 | orchestrator | 2026-04-11 00:56:05 | INFO  | Task a6b6ea8d-c62b-42cd-b7d0-3084defabe1a is in state SUCCESS
2026-04-11 00:56:05.236694 | orchestrator |
2026-04-11 00:56:05.236712 | orchestrator |
2026-04-11 00:56:05.236720 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-04-11 00:56:05.236727 | orchestrator |
2026-04-11 00:56:05.236734 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-04-11 00:56:05.236741 | orchestrator | Saturday 11 April 2026 00:54:10 +0000 (0:00:00.118) 0:00:00.118 ********
2026-04-11 00:56:05.236748 | orchestrator | changed: [localhost]
2026-04-11 00:56:05.236756 | orchestrator |
2026-04-11 00:56:05.236763 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-04-11 00:56:05.236769 | orchestrator | Saturday 11 April 2026 00:54:11 +0000 (0:00:00.950) 0:00:01.069 ********
2026-04-11 00:56:05.236776 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2026-04-11 00:56:05.236782 | orchestrator | changed: [localhost]
2026-04-11 00:56:05.236788 | orchestrator |
2026-04-11 00:56:05.236795 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-04-11 00:56:05.236826 | orchestrator | Saturday 11 April 2026 00:55:15 +0000 (0:01:04.152) 0:01:05.221 ********
2026-04-11 00:56:05.236833 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-04-11 00:56:05.236840 | orchestrator | changed: [localhost]
2026-04-11 00:56:05.236846 | orchestrator |
2026-04-11 00:56:05.236853 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 00:56:05.236859 | orchestrator |
2026-04-11 00:56:05.236866 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 00:56:05.236873 | orchestrator | Saturday 11 April 2026 00:55:41 +0000 (0:00:26.019) 0:01:31.241 ********
2026-04-11 00:56:05.236880 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:05.236887 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:05.236893 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:05.236900 | orchestrator |
2026-04-11 00:56:05.236907 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 00:56:05.236914 | orchestrator | Saturday 11 April 2026 00:55:41 +0000 (0:00:00.289) 0:01:31.530 ********
2026-04-11 00:56:05.236921 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-04-11 00:56:05.236928 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-04-11 00:56:05.236935 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-04-11 00:56:05.236942 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-04-11 00:56:05.236948 | orchestrator |
2026-04-11 00:56:05.236955 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-04-11 00:56:05.236962 | orchestrator | skipping: no hosts matched
2026-04-11 00:56:05.236969 | orchestrator |
2026-04-11 00:56:05.236976 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:56:05.236983 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:56:05.236991 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:56:05.236999 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:56:05.237006 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:56:05.237013 | orchestrator |
2026-04-11 00:56:05.237020 | orchestrator |
2026-04-11 00:56:05.237027 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:56:05.237033 | orchestrator | Saturday 11 April 2026 00:55:42 +0000 (0:00:00.387) 0:01:31.918 ********
2026-04-11 00:56:05.237040 | orchestrator | ===============================================================================
2026-04-11 00:56:05.237047 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 64.15s
2026-04-11 00:56:05.237054 | orchestrator | Download ironic-agent kernel ------------------------------------------- 26.02s
2026-04-11 00:56:05.237061 | orchestrator | Ensure the destination directory exists --------------------------------- 0.95s
2026-04-11 00:56:05.237068 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s
2026-04-11 00:56:05.237075 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-04-11 00:56:05.237081 | orchestrator |
2026-04-11 00:56:05.237088 | orchestrator |
2026-04-11 00:56:05.237095 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 00:56:05.237102 | orchestrator |
2026-04-11 00:56:05.237109 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 00:56:05.237116 | orchestrator | Saturday 11 April 2026 00:55:08 +0000 (0:00:00.304) 0:00:00.304 ********
2026-04-11 00:56:05.237122 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:05.237129 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:05.237141 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:05.237148 | orchestrator |
2026-04-11 00:56:05.237155 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 00:56:05.237162 | orchestrator | Saturday 11 April 2026 00:55:08 +0000 (0:00:00.275) 0:00:00.580 ********
2026-04-11 00:56:05.237168 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-11 00:56:05.237175 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-11 00:56:05.237182 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-11 00:56:05.237189 | orchestrator |
2026-04-11 00:56:05.237195 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-11 00:56:05.237202 | orchestrator |
2026-04-11 00:56:05.237209 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-11 00:56:05.237215 | orchestrator | Saturday 11 April 2026 00:55:09 +0000 (0:00:00.279) 0:00:00.859 ********
2026-04-11 00:56:05.237229 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:05.237236 | orchestrator |
2026-04-11 00:56:05.237242 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-04-11 00:56:05.237249 | orchestrator | Saturday 11 April 2026 00:55:09 +0000 (0:00:00.697) 0:00:01.556 ********
2026-04-11 00:56:05.237255 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (5 retries left).
2026-04-11 00:56:05.237262 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (4 retries left).
2026-04-11 00:56:05.237269 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (3 retries left).
2026-04-11 00:56:05.237275 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (2 retries left).
2026-04-11 00:56:05.237282 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (1 retries left).
2026-04-11 00:56:05.237292 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-11 00:56:05.237300 | orchestrator |
2026-04-11 00:56:05.237306 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:56:05.237312 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-11 00:56:05.237319 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:56:05.237325 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:56:05.237332 | orchestrator |
2026-04-11 00:56:05.237338 | orchestrator |
2026-04-11 00:56:05.237345 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:56:05.237351 | orchestrator | Saturday 11 April 2026 00:56:03 +0000 (0:00:53.752) 0:00:55.309 ********
2026-04-11 00:56:05.237358 | orchestrator | ===============================================================================
2026-04-11 00:56:05.237365 | orchestrator | service-ks-register : placement | Creating/deleting services ----------- 53.75s
2026-04-11 00:56:05.237374 | orchestrator | placement : include_tasks ----------------------------------------------- 0.70s
2026-04-11 00:56:05.237385 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.28s
2026-04-11 00:56:05.237396 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2026-04-11 00:56:05.237407 | orchestrator |
2026-04-11 00:56:05.237416 | orchestrator |
2026-04-11 00:56:05.237425 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 00:56:05.237438 | orchestrator |
2026-04-11 00:56:05.237444 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 00:56:05.237450 | orchestrator | Saturday 11 April 2026 00:55:09 +0000 (0:00:00.328) 0:00:00.328 ********
2026-04-11 00:56:05.237456 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:05.237463 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:05.237469 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:05.237476 | orchestrator |
2026-04-11 00:56:05.237482 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 00:56:05.237488 | orchestrator | Saturday 11 April 2026 00:55:09 +0000 (0:00:00.287) 0:00:00.615 ********
2026-04-11 00:56:05.237494 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-11 00:56:05.237501 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-11 00:56:05.237507 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-11 00:56:05.237512 | orchestrator |
2026-04-11 00:56:05.237519 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-11 00:56:05.237524 | orchestrator |
2026-04-11 00:56:05.237530 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-11 00:56:05.237536 | orchestrator | Saturday 11 April 2026 00:55:09 +0000 (0:00:00.351) 0:00:00.966 ********
2026-04-11 00:56:05.237542 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:05.237562 | orchestrator |
2026-04-11 00:56:05.237569 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] ***************
2026-04-11 00:56:05.237576 | orchestrator | Saturday 11 April 2026 00:55:10 +0000 (0:00:00.727) 0:00:01.694 ********
2026-04-11 00:56:05.237582 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (5 retries left).
2026-04-11 00:56:05.237588 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (4 retries left).
2026-04-11 00:56:05.237594 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (3 retries left).
2026-04-11 00:56:05.237600 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (2 retries left).
2026-04-11 00:56:05.237606 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (1 retries left).
2026-04-11 00:56:05.237618 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-11 00:56:05.237627 | orchestrator |
2026-04-11 00:56:05.237634 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:56:05.237641 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-11 00:56:05.237647 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:56:05.237659 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 00:56:05.237666 | orchestrator |
2026-04-11 00:56:05.237672 | orchestrator |
2026-04-11 00:56:05.237678 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:56:05.237684 | orchestrator | Saturday 11 April 2026 00:56:03 +0000 (0:00:53.461) 0:00:55.155 ********
2026-04-11 00:56:05.237690 | orchestrator | ===============================================================================
2026-04-11 00:56:05.237695 | orchestrator | service-ks-register : magnum | Creating/deleting services -------------- 53.46s
2026-04-11 00:56:05.237706 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.73s
2026-04-11 00:56:05.237712 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s
2026-04-11 00:56:05.237718 | orchestrator | Group hosts based on Kolla action
--------------------------------------- 0.29s
2026-04-11 00:56:05.237725 | orchestrator | 2026-04-11 00:56:05 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:56:08.279032 | orchestrator | 2026-04-11 00:56:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:56:08.280336 | orchestrator | 2026-04-11 00:56:08 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state STARTED
2026-04-11 00:56:08.280376 | orchestrator | 2026-04-11 00:56:08 | INFO  | Wait 1 second(s) until the next check
[... identical STARTED polling for tasks e775d4e9 and aafa082d repeated every 3 seconds from 00:56:11 to 00:56:29 ...]
2026-04-11 00:56:32.685885 | orchestrator | 2026-04-11 00:56:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:56:32.691916 | orchestrator | 2026-04-11 00:56:32 | INFO  | Task aafa082d-ae32-44c8-9ca8-8c613cba6435 is in state SUCCESS
2026-04-11 00:56:32.693871 | orchestrator |
2026-04-11 00:56:32.693910 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-11 00:56:32.693916 | orchestrator | 2.16.14
2026-04-11 00:56:32.693921 | orchestrator |
2026-04-11 00:56:32.693925 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-04-11 00:56:32.693929 | orchestrator |
2026-04-11 00:56:32.693933 | orchestrator | TASK [ceph-facts :
Include facts.yml] ****************************************** 2026-04-11 00:56:32.693937 | orchestrator | Saturday 11 April 2026 00:45:38 +0000 (0:00:00.693) 0:00:00.693 ******** 2026-04-11 00:56:32.693948 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.693953 | orchestrator | 2026-04-11 00:56:32.693957 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-11 00:56:32.693961 | orchestrator | Saturday 11 April 2026 00:45:39 +0000 (0:00:01.051) 0:00:01.745 ******** 2026-04-11 00:56:32.693965 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.693969 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.693972 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.693976 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.693980 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.693984 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.693987 | orchestrator | 2026-04-11 00:56:32.693991 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-11 00:56:32.693995 | orchestrator | Saturday 11 April 2026 00:45:41 +0000 (0:00:02.035) 0:00:03.781 ******** 2026-04-11 00:56:32.694008 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.694036 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.694041 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.694045 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.694049 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.694053 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.694057 | orchestrator | 2026-04-11 00:56:32.694061 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-11 00:56:32.694065 | orchestrator | Saturday 11 April 2026 00:45:41 +0000 (0:00:00.580) 
0:00:04.362 ******** 2026-04-11 00:56:32.694068 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.694072 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.694076 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.694080 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.694083 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.694087 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.694091 | orchestrator | 2026-04-11 00:56:32.694095 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 00:56:32.694099 | orchestrator | Saturday 11 April 2026 00:45:42 +0000 (0:00:00.945) 0:00:05.307 ******** 2026-04-11 00:56:32.694103 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.694107 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.694110 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.694114 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.694118 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.694122 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.694125 | orchestrator | 2026-04-11 00:56:32.694135 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-11 00:56:32.694139 | orchestrator | Saturday 11 April 2026 00:45:43 +0000 (0:00:00.687) 0:00:05.994 ******** 2026-04-11 00:56:32.694147 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.694151 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.694155 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.694159 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.694163 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.694167 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.694170 | orchestrator | 2026-04-11 00:56:32.694174 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-11 00:56:32.694178 | orchestrator | Saturday 11 April 2026 
00:45:44 +0000 (0:00:01.014) 0:00:07.009 ******** 2026-04-11 00:56:32.694191 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.694195 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.694199 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.694204 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.694208 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.694211 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.694217 | orchestrator | 2026-04-11 00:56:32.694224 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-11 00:56:32.694235 | orchestrator | Saturday 11 April 2026 00:45:45 +0000 (0:00:00.965) 0:00:07.974 ******** 2026-04-11 00:56:32.694241 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.694248 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.694262 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.694269 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.694276 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.694282 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.694289 | orchestrator | 2026-04-11 00:56:32.694300 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-11 00:56:32.694307 | orchestrator | Saturday 11 April 2026 00:45:46 +0000 (0:00:00.799) 0:00:08.774 ******** 2026-04-11 00:56:32.694314 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.694322 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.694328 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.694335 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.694342 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.694349 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.694355 | orchestrator | 2026-04-11 00:56:32.694362 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] 
************ 2026-04-11 00:56:32.694367 | orchestrator | Saturday 11 April 2026 00:45:47 +0000 (0:00:00.785) 0:00:09.560 ******** 2026-04-11 00:56:32.694371 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 00:56:32.694375 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 00:56:32.694380 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 00:56:32.694384 | orchestrator | 2026-04-11 00:56:32.694391 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-11 00:56:32.694397 | orchestrator | Saturday 11 April 2026 00:45:48 +0000 (0:00:00.979) 0:00:10.539 ******** 2026-04-11 00:56:32.694401 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.694404 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.694408 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.694423 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.694427 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.694431 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.694435 | orchestrator | 2026-04-11 00:56:32.694439 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-11 00:56:32.694442 | orchestrator | Saturday 11 April 2026 00:45:49 +0000 (0:00:01.397) 0:00:11.936 ******** 2026-04-11 00:56:32.694446 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 00:56:32.694454 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 00:56:32.694458 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 00:56:32.694462 | orchestrator | 2026-04-11 00:56:32.694466 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 
2026-04-11 00:56:32.694470 | orchestrator | Saturday 11 April 2026 00:45:51 +0000 (0:00:02.457) 0:00:14.393 ********
2026-04-11 00:56:32.694475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-11 00:56:32.694479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-11 00:56:32.694483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-11 00:56:32.694503 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.694513 | orchestrator |
2026-04-11 00:56:32.694518 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-11 00:56:32.694523 | orchestrator | Saturday 11 April 2026 00:45:52 +0000 (0:00:00.593) 0:00:14.987 ********
2026-04-11 00:56:32.694528 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.694535 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.694540 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.694544 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.694549 | orchestrator |
2026-04-11 00:56:32.694553 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-11 00:56:32.694558 | orchestrator | Saturday 11 April 2026 00:45:54 +0000 (0:00:01.729) 0:00:16.716 ********
2026-04-11 00:56:32.694563 | orchestrator |
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.694569 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.694574 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.694578 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.694583 | orchestrator |
2026-04-11 00:56:32.694587 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-11 00:56:32.694591 | orchestrator | Saturday 11 April 2026 00:45:54 +0000 (0:00:00.105) 0:00:16.822 ********
2026-04-11 00:56:32.694601 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 00:45:49.996101', 'end': '2026-04-11 00:45:50.075450', 'delta': '0:00:00.079349', 'msg':
'', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.694609 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 00:45:50.653141', 'end': '2026-04-11 00:45:50.738780', 'delta': '0:00:00.085639', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.694617 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 00:45:51.568454', 'end': '2026-04-11 00:45:51.654275', 'delta': '0:00:00.085821', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.694621 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.694626 | orchestrator |
2026-04-11 00:56:32.694630 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-11 00:56:32.694634 | orchestrator | Saturday 11 April 2026 00:45:54 +0000 (0:00:00.148) 0:00:16.970 ********
2026-04-11 00:56:32.694641 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.694646 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.694651 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.694655 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.694660 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.694664 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.694668 | orchestrator |
2026-04-11 00:56:32.694673 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-11 00:56:32.694677 | orchestrator | Saturday 11 April 2026 00:45:57 +0000 (0:00:02.884) 0:00:19.854 ********
2026-04-11 00:56:32.694682 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-11 00:56:32.694686 | orchestrator |
2026-04-11 00:56:32.694690 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-11 00:56:32.694695 | orchestrator | Saturday 11 April 2026 00:45:58 +0000 (0:00:00.811) 0:00:20.666 ********
2026-04-11 00:56:32.694699 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.694704 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.694708 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.694713 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.694717 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.694721 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.694725 | orchestrator |
2026-04-11 00:56:32.694730 | orchestrator | TASK [ceph-facts : Get current fsid]
*******************************************
2026-04-11 00:56:32.694734 | orchestrator | Saturday 11 April 2026 00:45:59 +0000 (0:00:01.033) 0:00:21.699 ********
2026-04-11 00:56:32.694739 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.694743 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.694750 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.694759 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.694768 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.694775 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.694781 | orchestrator |
2026-04-11 00:56:32.694787 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 00:56:32.694793 | orchestrator | Saturday 11 April 2026 00:46:00 +0000 (0:00:00.857) 0:00:22.557 ********
2026-04-11 00:56:32.694800 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.694808 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.694815 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.694827 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.694832 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.694836 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.694840 | orchestrator |
2026-04-11 00:56:32.694846 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-11 00:56:32.694855 | orchestrator | Saturday 11 April 2026 00:46:01 +0000 (0:00:01.510) 0:00:24.068 ********
2026-04-11 00:56:32.694864 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.694870 | orchestrator |
2026-04-11 00:56:32.694877 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-11 00:56:32.694883 | orchestrator | Saturday 11 April 2026 00:46:01 +0000 (0:00:00.291) 0:00:24.360 ********
2026-04-11 00:56:32.694889 | orchestrator | skipping:
[testbed-node-3]
2026-04-11 00:56:32.694898 | orchestrator |
2026-04-11 00:56:32.694904 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 00:56:32.694910 | orchestrator | Saturday 11 April 2026 00:46:02 +0000 (0:00:00.287) 0:00:24.647 ********
2026-04-11 00:56:32.694916 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.694922 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.694929 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.694940 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.694947 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.694953 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.694959 | orchestrator |
2026-04-11 00:56:32.694967 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-11 00:56:32.694974 | orchestrator | Saturday 11 April 2026 00:46:02 +0000 (0:00:00.876) 0:00:25.524 ********
2026-04-11 00:56:32.694980 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.694986 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.694991 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.694998 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.695012 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.695019 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.695026 | orchestrator |
2026-04-11 00:56:32.695032 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-11 00:56:32.695039 | orchestrator | Saturday 11 April 2026 00:46:03 +0000 (0:00:00.860) 0:00:26.384 ********
2026-04-11 00:56:32.695044 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.695048 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.695052 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.695056 | orchestrator | skipping:
[testbed-node-0]
2026-04-11 00:56:32.695059 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.695063 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.695067 | orchestrator |
2026-04-11 00:56:32.695071 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-11 00:56:32.695075 | orchestrator | Saturday 11 April 2026 00:46:04 +0000 (0:00:01.012) 0:00:27.397 ********
2026-04-11 00:56:32.695079 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.695083 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.695087 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.695090 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.695094 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.695098 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.695102 | orchestrator |
2026-04-11 00:56:32.695106 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-11 00:56:32.695110 | orchestrator | Saturday 11 April 2026 00:46:06 +0000 (0:00:01.206) 0:00:28.604 ********
2026-04-11 00:56:32.695115 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.695119 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.695123 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.695127 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.695133 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.695137 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.695147 | orchestrator |
2026-04-11 00:56:32.695154 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-11 00:56:32.695160 | orchestrator | Saturday 11 April 2026 00:46:06 +0000 (0:00:00.651) 0:00:29.256 ********
2026-04-11 00:56:32.695167 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.695174 | orchestrator | skipping:
[testbed-node-4]
2026-04-11 00:56:32.695180 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.695187 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.695192 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.695196 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.695200 | orchestrator |
2026-04-11 00:56:32.695208 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-11 00:56:32.695212 | orchestrator | Saturday 11 April 2026 00:46:07 +0000 (0:00:00.765) 0:00:30.021 ********
2026-04-11 00:56:32.695216 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.695220 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.695224 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.695228 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.695232 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.695235 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.695239 | orchestrator |
2026-04-11 00:56:32.695243 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-11 00:56:32.695247 | orchestrator | Saturday 11 April 2026 00:46:07 +0000 (0:00:00.453) 0:00:30.475 ********
2026-04-11 00:56:32.695251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e341cbc8--c5ec--51c3--80df--3957b9d9208d-osd--block--e341cbc8--c5ec--51c3--80df--3957b9d9208d', 'dm-uuid-LVM-S0wDmmgSvTIVfiwEGF0g1F0CnemPmWNqM9F7fgHWifJ2wCsLOyzxcm4ZH2xKiHNJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-11 00:56:32.695257 | orchestrator |
skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11d3af0d--f4a7--5343--8eab--aab153531328-osd--block--11d3af0d--f4a7--5343--8eab--aab153531328', 'dm-uuid-LVM-tDUd342fe4ZnfDc8Wu8UNw3aUoGr3BRp821E7R7E3xk6XXij1PFL9yNGwrqylWZT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695284 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--55dbb7e6--b9f3--5799--9073--e4231f2b06a1-osd--block--55dbb7e6--b9f3--5799--9073--e4231f2b06a1', 'dm-uuid-LVM-q4aABbXV26TQdr8S8cV0O5GfkzomOoD2vACehG4dYj4ziiqzz3denVOa1N38WtSr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--296a8e73--f310--5078--85ef--ed795f213448-osd--block--296a8e73--f310--5078--85ef--ed795f213448', 'dm-uuid-LVM-PiByfQxnJtnDQmV6t0YalLZV3p17iPBsOgsdbURyA1D5C5469CzdNcaWVBWmTopI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b3fdd62a--f314--5cc9--917d--dd1b13b97b05-osd--block--b3fdd62a--f314--5cc9--917d--dd1b13b97b05', 'dm-uuid-LVM-9bViJP2WnhEwOKK4t0xE1sWvZ4XgBYlv87Aq0YZ8iV3pvrtVTiQyuFt7x0O0f2BF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1d143ac2--4627--58f1--a5c6--72fac7fe3e62-osd--block--1d143ac2--4627--58f1--a5c6--72fac7fe3e62', 'dm-uuid-LVM-22G0szTE25Xt7DPLpIaRBVJHOyB57KhVqhCHe8dePTS9MLDWlV9OFtlgP4roAWwd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part1', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part14', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part15', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part16', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695452 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e341cbc8--c5ec--51c3--80df--3957b9d9208d-osd--block--e341cbc8--c5ec--51c3--80df--3957b9d9208d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-olOe9G-ekTw-7Vdj-LpbS-YSDd-fjEj-vmBB6x', 'scsi-0QEMU_QEMU_HARDDISK_dbdf8861-c941-4e45-ade6-c92a4c866684', 'scsi-SQEMU_QEMU_HARDDISK_dbdf8861-c941-4e45-ade6-c92a4c866684'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--11d3af0d--f4a7--5343--8eab--aab153531328-osd--block--11d3af0d--f4a7--5343--8eab--aab153531328'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Dyo5Vg-NZ73-VPuR-egkh-6Fdi-S64F-ngOBFV', 'scsi-0QEMU_QEMU_HARDDISK_baefa5b6-df7d-4524-a048-85b2fa508a08', 'scsi-SQEMU_QEMU_HARDDISK_baefa5b6-df7d-4524-a048-85b2fa508a08'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_766868e9-fa25-49fa-8c1f-44c6e6ffb547', 'scsi-SQEMU_QEMU_HARDDISK_766868e9-fa25-49fa-8c1f-44c6e6ffb547'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695526 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 
'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189', 'scsi-SQEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part1', 'scsi-SQEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part14', 'scsi-SQEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part15', 'scsi-SQEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part16', 'scsi-SQEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part1', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part14', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part15', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part16', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695609 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--55dbb7e6--b9f3--5799--9073--e4231f2b06a1-osd--block--55dbb7e6--b9f3--5799--9073--e4231f2b06a1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HA4VZ2-mxEg-9HSl-BTOI-sGBM-yJfd-TmKx1R', 'scsi-0QEMU_QEMU_HARDDISK_94d6c339-d22c-4552-bea0-bfaa215148a5', 'scsi-SQEMU_QEMU_HARDDISK_94d6c339-d22c-4552-bea0-bfaa215148a5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--296a8e73--f310--5078--85ef--ed795f213448-osd--block--296a8e73--f310--5078--85ef--ed795f213448'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qhUFuc-8WYQ-4Jlf-4HeJ-A0U0-sNN1-ZGuRig', 'scsi-0QEMU_QEMU_HARDDISK_5beaa40e-2504-4a54-8b0e-fa95ef0fce86', 'scsi-SQEMU_QEMU_HARDDISK_5beaa40e-2504-4a54-8b0e-fa95ef0fce86'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_73d9f47d-c224-4fc5-85fe-6159d35bcb3b', 'scsi-SQEMU_QEMU_HARDDISK_73d9f47d-c224-4fc5-85fe-6159d35bcb3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part1', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part14', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part15', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part16', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695685 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b3fdd62a--f314--5cc9--917d--dd1b13b97b05-osd--block--b3fdd62a--f314--5cc9--917d--dd1b13b97b05'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Db0l3X-fLkZ-wodJ-EOWN-hKEx-fTZd-L0AKc1', 'scsi-0QEMU_QEMU_HARDDISK_95d65739-9dbf-4d48-9a83-eea9c175e7b9', 'scsi-SQEMU_QEMU_HARDDISK_95d65739-9dbf-4d48-9a83-eea9c175e7b9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1d143ac2--4627--58f1--a5c6--72fac7fe3e62-osd--block--1d143ac2--4627--58f1--a5c6--72fac7fe3e62'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AUSRl6-Q0zh-jZKV-Fqhx-ykrt-rr44-JZ8eGj', 'scsi-0QEMU_QEMU_HARDDISK_f9459d28-2556-4a8a-b254-3aab23978995', 'scsi-SQEMU_QEMU_HARDDISK_f9459d28-2556-4a8a-b254-3aab23978995'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6786cac-aebf-4828-8780-9d987947f514', 'scsi-SQEMU_QEMU_HARDDISK_a6786cac-aebf-4828-8780-9d987947f514'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695700 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.695707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695711 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.695724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695728 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.695732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695736 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.695740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695807 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c', 'scsi-SQEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695823 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695829 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.695851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-11 00:56:32.695876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 00:56:32.695882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394', 'scsi-SQEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:56:32.695923 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.695930 | orchestrator | 2026-04-11 00:56:32.695938 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 00:56:32.695948 | orchestrator | Saturday 11 April 2026 00:46:09 +0000 (0:00:01.504) 0:00:31.980 ******** 2026-04-11 00:56:32.695956 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e341cbc8--c5ec--51c3--80df--3957b9d9208d-osd--block--e341cbc8--c5ec--51c3--80df--3957b9d9208d', 
'dm-uuid-LVM-S0wDmmgSvTIVfiwEGF0g1F0CnemPmWNqM9F7fgHWifJ2wCsLOyzxcm4ZH2xKiHNJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:56:32.695962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11d3af0d--f4a7--5343--8eab--aab153531328-osd--block--11d3af0d--f4a7--5343--8eab--aab153531328', 'dm-uuid-LVM-tDUd342fe4ZnfDc8Wu8UNw3aUoGr3BRp821E7R7E3xk6XXij1PFL9yNGwrqylWZT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:56:32.695966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:56:32.695974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:56:32.695978 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:56:32.695988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--55dbb7e6--b9f3--5799--9073--e4231f2b06a1-osd--block--55dbb7e6--b9f3--5799--9073--e4231f2b06a1', 'dm-uuid-LVM-q4aABbXV26TQdr8S8cV0O5GfkzomOoD2vACehG4dYj4ziiqzz3denVOa1N38WtSr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  [2026-04-11 00:56:32.695992 – 00:56:32.696298 | orchestrator | … repeated "skipping" loop results elided: the same conditional was evaluated for every remaining block device (dm-0, dm-1, loop0–loop7, sda with partitions sda1/sda14/sda15/sda16, sdb, sdc, sdd, sr0) and skipped on each host — on testbed-node-3, testbed-node-4, and testbed-node-5 because 'osd_auto_discovery | default(False) | bool' was false, and on testbed-node-0 because 'inventory_hostname in groups.get(osd_group_name, [])' was false …]  2026-04-11 00:56:32.696303 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:56:32.696310 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part1', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part14', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part15', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': 
'10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part16', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:56:32.696320 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:56:32.696325 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-04-11 00:56:32.696329 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:56:32.696335 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189', 'scsi-SQEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part1', 'scsi-SQEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part14', 'scsi-SQEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part15', 'scsi-SQEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part16', 'scsi-SQEMU_QEMU_HARDDISK_27525bbe-abd4-48e3-93a9-60b04238c189-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:56:32.696343 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:56:32.696347 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--55dbb7e6--b9f3--5799--9073--e4231f2b06a1-osd--block--55dbb7e6--b9f3--5799--9073--e4231f2b06a1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HA4VZ2-mxEg-9HSl-BTOI-sGBM-yJfd-TmKx1R', 'scsi-0QEMU_QEMU_HARDDISK_94d6c339-d22c-4552-bea0-bfaa215148a5', 'scsi-SQEMU_QEMU_HARDDISK_94d6c339-d22c-4552-bea0-bfaa215148a5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696354 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696403 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696424 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696435 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--296a8e73--f310--5078--85ef--ed795f213448-osd--block--296a8e73--f310--5078--85ef--ed795f213448'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qhUFuc-8WYQ-4Jlf-4HeJ-A0U0-sNN1-ZGuRig', 'scsi-0QEMU_QEMU_HARDDISK_5beaa40e-2504-4a54-8b0e-fa95ef0fce86', 'scsi-SQEMU_QEMU_HARDDISK_5beaa40e-2504-4a54-8b0e-fa95ef0fce86'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696446 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696452 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696468 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696474 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_73d9f47d-c224-4fc5-85fe-6159d35bcb3b', 'scsi-SQEMU_QEMU_HARDDISK_73d9f47d-c224-4fc5-85fe-6159d35bcb3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696480 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696547 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696559 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394', 'scsi-SQEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec80c904-46fb-4b7e-ade7-56cb246a0394-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696576 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696583 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696590 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.696596 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.696624 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.696633 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.696640 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.696650 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696661 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696667 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696674 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696681 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696688 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696704 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696710 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696719 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c', 'scsi-SQEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b55b66d-3192-4580-be0d-0026f43c659c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696724 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 00:56:32.696728 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.696732 | orchestrator |
2026-04-11 00:56:32.696738 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-11 00:56:32.696743 | orchestrator | Saturday 11 April 2026 00:46:10 +0000 (0:00:01.438) 0:00:33.418 ********
2026-04-11 00:56:32.696747 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.696751 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.696755 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.696764 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.696768 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.696772 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.696776 | orchestrator |
2026-04-11 00:56:32.696780 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-11 00:56:32.696786 | orchestrator | Saturday 11 April 2026 00:46:11 +0000 (0:00:01.088) 0:00:34.506 ********
2026-04-11 00:56:32.696790 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.696794 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.696798 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.696801 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.696805 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.696809 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.696813 | orchestrator |
2026-04-11 00:56:32.696816 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-11 00:56:32.696820 | orchestrator | Saturday 11 April 2026 00:46:12 +0000 (0:00:00.714) 0:00:35.221 ********
2026-04-11 00:56:32.696824 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.696828 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.696831 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.696835 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.696839 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.696843 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.696846 | orchestrator |
2026-04-11 00:56:32.696850 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-11 00:56:32.696854 | orchestrator | Saturday 11 April 2026 00:46:13 +0000 (0:00:01.130) 0:00:36.352 ********
2026-04-11 00:56:32.696858 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.696861 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.696865 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.696869 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.696873 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.696876 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.696880 | orchestrator |
2026-04-11 00:56:32.696884 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-11 00:56:32.696888 | orchestrator | Saturday 11 April 2026 00:46:15 +0000 (0:00:01.896) 0:00:38.248 ********
2026-04-11 00:56:32.696891 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.696895 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.696899 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.696903 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.696906 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.696910 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.696914 | orchestrator |
2026-04-11 00:56:32.696918 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-11 00:56:32.696922 | orchestrator | Saturday 11 April 2026 00:46:17 +0000 (0:00:01.354) 0:00:39.603 ********
2026-04-11 00:56:32.696927 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.696934 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.696940 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.696947 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.696953 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.696960 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.696967 | orchestrator |
2026-04-11 00:56:32.696972 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-11 00:56:32.696976 | orchestrator | Saturday 11 April 2026 00:46:17 +0000 (0:00:00.877) 0:00:40.480 ********
2026-04-11 00:56:32.696980 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-11 00:56:32.696987 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-11 00:56:32.696991 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-11 00:56:32.696994 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-11 00:56:32.696998 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-11 00:56:32.697005 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-11 00:56:32.697009 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-11 00:56:32.697013 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-11 00:56:32.697017 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 00:56:32.697020 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-11 00:56:32.697024 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-11 00:56:32.697028 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 00:56:32.697031 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-11 00:56:32.697035 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-11 00:56:32.697039 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 00:56:32.697043 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-11 00:56:32.697046 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 00:56:32.697053 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-11 00:56:32.697056 | orchestrator |
2026-04-11 00:56:32.697060 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-11 00:56:32.697064 | orchestrator | Saturday 11 April 2026 00:46:22 +0000 (0:00:04.217) 0:00:44.698 ********
2026-04-11 00:56:32.697068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-11 00:56:32.697071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-11 00:56:32.697075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-11 00:56:32.697079 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.697085 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-11 00:56:32.697089 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-11 00:56:32.697092 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-11 00:56:32.697096 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.697100 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-11 00:56:32.697107 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-11 00:56:32.697111 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-11 00:56:32.697117 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.697121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-11 00:56:32.697125 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-11 00:56:32.697128 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-11 00:56:32.697132 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.697138 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-11 00:56:32.697142 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-11 00:56:32.697146 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-11 00:56:32.697149 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.697153 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-11 00:56:32.697157 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-11 00:56:32.697160 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-11 00:56:32.697164 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.697168 | orchestrator | 2026-04-11 00:56:32.697172 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 00:56:32.697178 | orchestrator | Saturday 11 April 2026 00:46:23 +0000 (0:00:01.592) 0:00:46.291 ******** 2026-04-11 00:56:32.697182 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.697186 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.697189 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.697193 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:56:32.697197 | orchestrator | 2026-04-11 00:56:32.697204 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 00:56:32.697208 | orchestrator | Saturday 11 April 2026 00:46:25 +0000 (0:00:01.528) 0:00:47.819 ******** 2026-04-11 00:56:32.697211 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.697215 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.697219 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.697222 | orchestrator | 2026-04-11 00:56:32.697226 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 00:56:32.697230 | orchestrator | Saturday 11 April 2026 00:46:25 +0000 (0:00:00.432) 0:00:48.251 ******** 2026-04-11 00:56:32.697234 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.697237 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.697241 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.697245 | orchestrator | 2026-04-11 00:56:32.697249 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 00:56:32.697252 | orchestrator | Saturday 11 April 2026 00:46:26 +0000 (0:00:00.372) 0:00:48.624 ******** 2026-04-11 00:56:32.697256 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.697260 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.697264 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.697267 | orchestrator | 2026-04-11 00:56:32.697271 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 00:56:32.697275 | orchestrator | Saturday 11 April 2026 00:46:26 +0000 (0:00:00.635) 0:00:49.259 ******** 2026-04-11 00:56:32.697278 | orchestrator | 
ok: [testbed-node-3] 2026-04-11 00:56:32.697284 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.697290 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.697297 | orchestrator | 2026-04-11 00:56:32.697303 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 00:56:32.697309 | orchestrator | Saturday 11 April 2026 00:46:27 +0000 (0:00:01.161) 0:00:50.421 ******** 2026-04-11 00:56:32.697316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 00:56:32.697322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 00:56:32.697328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 00:56:32.697335 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.697343 | orchestrator | 2026-04-11 00:56:32.697347 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 00:56:32.697351 | orchestrator | Saturday 11 April 2026 00:46:28 +0000 (0:00:00.496) 0:00:50.917 ******** 2026-04-11 00:56:32.697354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 00:56:32.697358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 00:56:32.697362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 00:56:32.697366 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.697369 | orchestrator | 2026-04-11 00:56:32.697373 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 00:56:32.697377 | orchestrator | Saturday 11 April 2026 00:46:29 +0000 (0:00:01.152) 0:00:52.069 ******** 2026-04-11 00:56:32.697381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 00:56:32.697385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 00:56:32.697388 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-04-11 00:56:32.697392 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.697396 | orchestrator | 2026-04-11 00:56:32.697399 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 00:56:32.697403 | orchestrator | Saturday 11 April 2026 00:46:29 +0000 (0:00:00.438) 0:00:52.508 ******** 2026-04-11 00:56:32.697407 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.697411 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.697415 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.697418 | orchestrator | 2026-04-11 00:56:32.697422 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 00:56:32.697431 | orchestrator | Saturday 11 April 2026 00:46:30 +0000 (0:00:00.859) 0:00:53.367 ******** 2026-04-11 00:56:32.697435 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-11 00:56:32.697439 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-11 00:56:32.697458 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-11 00:56:32.697463 | orchestrator | 2026-04-11 00:56:32.697469 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 00:56:32.697477 | orchestrator | Saturday 11 April 2026 00:46:31 +0000 (0:00:00.804) 0:00:54.171 ******** 2026-04-11 00:56:32.697516 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 00:56:32.697524 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 00:56:32.697532 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 00:56:32.697538 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-11 00:56:32.697544 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 00:56:32.697550 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 00:56:32.697556 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 00:56:32.697561 | orchestrator | 2026-04-11 00:56:32.697567 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 00:56:32.697573 | orchestrator | Saturday 11 April 2026 00:46:33 +0000 (0:00:01.999) 0:00:56.170 ******** 2026-04-11 00:56:32.697578 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 00:56:32.697583 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 00:56:32.697589 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 00:56:32.697594 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-11 00:56:32.697600 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 00:56:32.697605 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 00:56:32.697610 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 00:56:32.697616 | orchestrator | 2026-04-11 00:56:32.697621 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 00:56:32.697627 | orchestrator | Saturday 11 April 2026 00:46:36 +0000 (0:00:02.600) 0:00:58.771 ******** 2026-04-11 00:56:32.697633 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.697640 | orchestrator | 2026-04-11 00:56:32.697647 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-04-11 00:56:32.697653 | orchestrator | Saturday 11 April 2026 00:46:38 +0000 (0:00:02.067) 0:01:00.839 ******** 2026-04-11 00:56:32.697659 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.697666 | orchestrator | 2026-04-11 00:56:32.697677 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 00:56:32.697683 | orchestrator | Saturday 11 April 2026 00:46:40 +0000 (0:00:01.857) 0:01:02.696 ******** 2026-04-11 00:56:32.697689 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.697696 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.697700 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.697704 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.697708 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.697711 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.697715 | orchestrator | 2026-04-11 00:56:32.697719 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 00:56:32.697728 | orchestrator | Saturday 11 April 2026 00:46:41 +0000 (0:00:01.234) 0:01:03.930 ******** 2026-04-11 00:56:32.697732 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.697735 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.697739 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.697743 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.697747 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.697750 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.697754 | orchestrator | 2026-04-11 00:56:32.697758 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-11 00:56:32.697762 | orchestrator | Saturday 11 April 2026 00:46:42 +0000 
(0:00:01.188) 0:01:05.119 ******** 2026-04-11 00:56:32.697765 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.697769 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.697773 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.697777 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.697780 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.697784 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.697788 | orchestrator | 2026-04-11 00:56:32.697791 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 00:56:32.697795 | orchestrator | Saturday 11 April 2026 00:46:43 +0000 (0:00:00.961) 0:01:06.081 ******** 2026-04-11 00:56:32.697799 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.697802 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.697806 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.697810 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.697814 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.697817 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.697821 | orchestrator | 2026-04-11 00:56:32.697825 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 00:56:32.697828 | orchestrator | Saturday 11 April 2026 00:46:44 +0000 (0:00:01.108) 0:01:07.189 ******** 2026-04-11 00:56:32.697832 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.697836 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.697840 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.697843 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.697847 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.697913 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.697919 | orchestrator | 2026-04-11 00:56:32.697923 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-04-11 00:56:32.697930 | orchestrator | Saturday 11 April 2026 00:46:45 +0000 (0:00:01.046) 0:01:08.235 ******** 2026-04-11 00:56:32.697934 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.697938 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.697941 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.697945 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.697949 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.697955 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.697959 | orchestrator | 2026-04-11 00:56:32.697963 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 00:56:32.697967 | orchestrator | Saturday 11 April 2026 00:46:46 +0000 (0:00:00.707) 0:01:08.942 ******** 2026-04-11 00:56:32.697971 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.697974 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.697978 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.697982 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.697986 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.697989 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.697993 | orchestrator | 2026-04-11 00:56:32.697997 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 00:56:32.698001 | orchestrator | Saturday 11 April 2026 00:46:46 +0000 (0:00:00.440) 0:01:09.383 ******** 2026-04-11 00:56:32.698004 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.698011 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.698037 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.698041 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.698045 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.698048 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.698052 | orchestrator | 2026-04-11 
00:56:32.698056 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 00:56:32.698060 | orchestrator | Saturday 11 April 2026 00:46:47 +0000 (0:00:01.065) 0:01:10.448 ******** 2026-04-11 00:56:32.698064 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.698067 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.698071 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.698075 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.698078 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.698082 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.698086 | orchestrator | 2026-04-11 00:56:32.698090 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 00:56:32.698093 | orchestrator | Saturday 11 April 2026 00:46:48 +0000 (0:00:01.078) 0:01:11.526 ******** 2026-04-11 00:56:32.698097 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.698101 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.698105 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.698108 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.698112 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.698116 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.698119 | orchestrator | 2026-04-11 00:56:32.698123 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 00:56:32.698127 | orchestrator | Saturday 11 April 2026 00:46:49 +0000 (0:00:00.651) 0:01:12.178 ******** 2026-04-11 00:56:32.698131 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.698135 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.698138 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.698142 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.698146 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.698149 | 
orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.698153 | orchestrator | 2026-04-11 00:56:32.698157 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 00:56:32.698161 | orchestrator | Saturday 11 April 2026 00:46:50 +0000 (0:00:00.471) 0:01:12.649 ******** 2026-04-11 00:56:32.698164 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.698168 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.698172 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.698176 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.698179 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.698183 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.698187 | orchestrator | 2026-04-11 00:56:32.698191 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 00:56:32.698195 | orchestrator | Saturday 11 April 2026 00:46:50 +0000 (0:00:00.649) 0:01:13.299 ******** 2026-04-11 00:56:32.698198 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.698202 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.698206 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.698210 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.698213 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.698217 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.698221 | orchestrator | 2026-04-11 00:56:32.698225 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 00:56:32.698229 | orchestrator | Saturday 11 April 2026 00:46:51 +0000 (0:00:00.533) 0:01:13.833 ******** 2026-04-11 00:56:32.698232 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.698236 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.698240 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.698244 | orchestrator | skipping: [testbed-node-0] 2026-04-11 
00:56:32.698247 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.698251 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.698258 | orchestrator | 2026-04-11 00:56:32.698262 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 00:56:32.698266 | orchestrator | Saturday 11 April 2026 00:46:51 +0000 (0:00:00.624) 0:01:14.457 ******** 2026-04-11 00:56:32.698270 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.698273 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.698277 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.698281 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.698284 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.698288 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.698292 | orchestrator | 2026-04-11 00:56:32.698296 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 00:56:32.698299 | orchestrator | Saturday 11 April 2026 00:46:52 +0000 (0:00:00.500) 0:01:14.958 ******** 2026-04-11 00:56:32.698303 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.698307 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.698311 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.698315 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.698334 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.698339 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.698343 | orchestrator | 2026-04-11 00:56:32.698346 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 00:56:32.698350 | orchestrator | Saturday 11 April 2026 00:46:53 +0000 (0:00:00.612) 0:01:15.570 ******** 2026-04-11 00:56:32.698354 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.698358 | orchestrator | skipping: [testbed-node-4] 2026-04-11 
00:56:32.698361 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.698365 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.698371 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.698375 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.698379 | orchestrator | 2026-04-11 00:56:32.698382 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 00:56:32.698386 | orchestrator | Saturday 11 April 2026 00:46:53 +0000 (0:00:00.521) 0:01:16.092 ******** 2026-04-11 00:56:32.698390 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.698394 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.698397 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.698401 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.698405 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.698409 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.698412 | orchestrator | 2026-04-11 00:56:32.698416 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 00:56:32.698420 | orchestrator | Saturday 11 April 2026 00:46:54 +0000 (0:00:00.695) 0:01:16.787 ******** 2026-04-11 00:56:32.698424 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.698427 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.698431 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.698435 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.698439 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.698443 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.698446 | orchestrator | 2026-04-11 00:56:32.698450 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-11 00:56:32.698454 | orchestrator | Saturday 11 April 2026 00:46:55 +0000 (0:00:00.889) 0:01:17.677 ******** 2026-04-11 00:56:32.698458 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:56:32.698461 | 
orchestrator | changed: [testbed-node-3] 2026-04-11 00:56:32.698465 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:56:32.698470 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.698477 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:56:32.698495 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:56:32.698502 | orchestrator | 2026-04-11 00:56:32.698508 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-11 00:56:32.698514 | orchestrator | Saturday 11 April 2026 00:46:56 +0000 (0:00:01.184) 0:01:18.861 ******** 2026-04-11 00:56:32.698525 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:56:32.698531 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:56:32.698537 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.698543 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:56:32.698548 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:56:32.698554 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:56:32.698559 | orchestrator | 2026-04-11 00:56:32.698564 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-11 00:56:32.698571 | orchestrator | Saturday 11 April 2026 00:46:58 +0000 (0:00:01.838) 0:01:20.699 ******** 2026-04-11 00:56:32.698577 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.698583 | orchestrator | 2026-04-11 00:56:32.698590 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-11 00:56:32.698596 | orchestrator | Saturday 11 April 2026 00:46:59 +0000 (0:00:00.949) 0:01:21.649 ******** 2026-04-11 00:56:32.698603 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.698609 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.698616 | orchestrator | 
skipping: [testbed-node-5] 2026-04-11 00:56:32.698623 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.698630 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.698636 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.698643 | orchestrator | 2026-04-11 00:56:32.698648 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-11 00:56:32.698652 | orchestrator | Saturday 11 April 2026 00:46:59 +0000 (0:00:00.681) 0:01:22.330 ******** 2026-04-11 00:56:32.698656 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.698661 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.698666 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.698672 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.698678 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.698685 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.698691 | orchestrator | 2026-04-11 00:56:32.698697 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-11 00:56:32.698705 | orchestrator | Saturday 11 April 2026 00:47:00 +0000 (0:00:00.608) 0:01:22.938 ******** 2026-04-11 00:56:32.698711 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 00:56:32.698718 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 00:56:32.698725 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 00:56:32.698732 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 00:56:32.698739 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 00:56:32.698746 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 00:56:32.698751 | orchestrator | ok: 
[testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 00:56:32.698756 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 00:56:32.698760 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 00:56:32.698764 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 00:56:32.698793 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 00:56:32.698803 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 00:56:32.698808 | orchestrator | 2026-04-11 00:56:32.698815 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-11 00:56:32.698821 | orchestrator | Saturday 11 April 2026 00:47:02 +0000 (0:00:02.017) 0:01:24.955 ******** 2026-04-11 00:56:32.698826 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:56:32.698842 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:56:32.698848 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:56:32.698854 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.698860 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:56:32.698865 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:56:32.698871 | orchestrator | 2026-04-11 00:56:32.698878 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-11 00:56:32.698884 | orchestrator | Saturday 11 April 2026 00:47:03 +0000 (0:00:00.956) 0:01:25.912 ******** 2026-04-11 00:56:32.698890 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.698896 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.698903 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.698910 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.698916 | 
orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.698922 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.698929 | orchestrator | 2026-04-11 00:56:32.698936 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-11 00:56:32.698942 | orchestrator | Saturday 11 April 2026 00:47:04 +0000 (0:00:00.659) 0:01:26.572 ******** 2026-04-11 00:56:32.698949 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.698955 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.698962 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.698968 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.698974 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.698981 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.698985 | orchestrator | 2026-04-11 00:56:32.698989 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-11 00:56:32.698993 | orchestrator | Saturday 11 April 2026 00:47:04 +0000 (0:00:00.645) 0:01:27.217 ******** 2026-04-11 00:56:32.698996 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699000 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699004 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699007 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699011 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699015 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699019 | orchestrator | 2026-04-11 00:56:32.699022 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-11 00:56:32.699026 | orchestrator | Saturday 11 April 2026 00:47:05 +0000 (0:00:00.674) 0:01:27.892 ******** 2026-04-11 00:56:32.699030 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.699034 | orchestrator | 2026-04-11 00:56:32.699038 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-11 00:56:32.699041 | orchestrator | Saturday 11 April 2026 00:47:06 +0000 (0:00:01.209) 0:01:29.101 ******** 2026-04-11 00:56:32.699045 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.699049 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.699053 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.699056 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.699060 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.699064 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.699068 | orchestrator | 2026-04-11 00:56:32.699072 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-11 00:56:32.699075 | orchestrator | Saturday 11 April 2026 00:48:04 +0000 (0:00:57.645) 0:02:26.746 ******** 2026-04-11 00:56:32.699079 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 00:56:32.699083 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 00:56:32.699087 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 00:56:32.699090 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699094 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 00:56:32.699102 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 00:56:32.699105 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 00:56:32.699109 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699113 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 
00:56:32.699117 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 00:56:32.699120 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 00:56:32.699124 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699128 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 00:56:32.699132 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 00:56:32.699135 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 00:56:32.699139 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699143 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 00:56:32.699146 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 00:56:32.699150 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 00:56:32.699154 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699175 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 00:56:32.699180 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 00:56:32.699183 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 00:56:32.699187 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699191 | orchestrator | 2026-04-11 00:56:32.699195 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-11 00:56:32.699199 | orchestrator | Saturday 11 April 2026 00:48:04 +0000 (0:00:00.704) 0:02:27.451 ******** 2026-04-11 00:56:32.699205 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699209 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699213 | 
orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699216 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699220 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699224 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699228 | orchestrator | 2026-04-11 00:56:32.699231 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-11 00:56:32.699235 | orchestrator | Saturday 11 April 2026 00:48:05 +0000 (0:00:00.536) 0:02:27.988 ******** 2026-04-11 00:56:32.699239 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699243 | orchestrator | 2026-04-11 00:56:32.699247 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-11 00:56:32.699250 | orchestrator | Saturday 11 April 2026 00:48:05 +0000 (0:00:00.129) 0:02:28.118 ******** 2026-04-11 00:56:32.699254 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699258 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699261 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699265 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699269 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699273 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699276 | orchestrator | 2026-04-11 00:56:32.699280 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-11 00:56:32.699284 | orchestrator | Saturday 11 April 2026 00:48:06 +0000 (0:00:00.640) 0:02:28.759 ******** 2026-04-11 00:56:32.699288 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699291 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699295 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699299 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699305 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699309 | 
orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699313 | orchestrator | 2026-04-11 00:56:32.699316 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-11 00:56:32.699320 | orchestrator | Saturday 11 April 2026 00:48:06 +0000 (0:00:00.487) 0:02:29.246 ******** 2026-04-11 00:56:32.699324 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699327 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699331 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699335 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699339 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699342 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699346 | orchestrator | 2026-04-11 00:56:32.699350 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-11 00:56:32.699354 | orchestrator | Saturday 11 April 2026 00:48:07 +0000 (0:00:00.661) 0:02:29.908 ******** 2026-04-11 00:56:32.699357 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.699361 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.699365 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.699369 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.699372 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.699376 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.699380 | orchestrator | 2026-04-11 00:56:32.699383 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-11 00:56:32.699387 | orchestrator | Saturday 11 April 2026 00:48:10 +0000 (0:00:02.768) 0:02:32.676 ******** 2026-04-11 00:56:32.699391 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.699395 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.699398 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.699402 | orchestrator | ok: [testbed-node-0] 2026-04-11 
00:56:32.699406 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.699410 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.699413 | orchestrator | 2026-04-11 00:56:32.699417 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-11 00:56:32.699421 | orchestrator | Saturday 11 April 2026 00:48:10 +0000 (0:00:00.724) 0:02:33.400 ******** 2026-04-11 00:56:32.699425 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.699429 | orchestrator | 2026-04-11 00:56:32.699433 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-11 00:56:32.699437 | orchestrator | Saturday 11 April 2026 00:48:11 +0000 (0:00:01.025) 0:02:34.426 ******** 2026-04-11 00:56:32.699440 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699444 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699448 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699452 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699455 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699459 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699463 | orchestrator | 2026-04-11 00:56:32.699466 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-11 00:56:32.699470 | orchestrator | Saturday 11 April 2026 00:48:12 +0000 (0:00:00.623) 0:02:35.049 ******** 2026-04-11 00:56:32.699474 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699478 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699481 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699496 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699502 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699506 | 
orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699509 | orchestrator | 2026-04-11 00:56:32.699513 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-11 00:56:32.699517 | orchestrator | Saturday 11 April 2026 00:48:13 +0000 (0:00:00.750) 0:02:35.800 ******** 2026-04-11 00:56:32.699521 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699528 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699544 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699549 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699553 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699556 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699560 | orchestrator | 2026-04-11 00:56:32.699564 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-11 00:56:32.699568 | orchestrator | Saturday 11 April 2026 00:48:13 +0000 (0:00:00.563) 0:02:36.364 ******** 2026-04-11 00:56:32.699572 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699575 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699579 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699585 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699589 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699593 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699596 | orchestrator | 2026-04-11 00:56:32.699600 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-11 00:56:32.699604 | orchestrator | Saturday 11 April 2026 00:48:14 +0000 (0:00:00.805) 0:02:37.169 ******** 2026-04-11 00:56:32.699608 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699611 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699615 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699619 | 
orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699622 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699626 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699630 | orchestrator | 2026-04-11 00:56:32.699634 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-11 00:56:32.699637 | orchestrator | Saturday 11 April 2026 00:48:15 +0000 (0:00:00.567) 0:02:37.737 ******** 2026-04-11 00:56:32.699641 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699647 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699653 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699661 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699671 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699677 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699684 | orchestrator | 2026-04-11 00:56:32.699691 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-11 00:56:32.699697 | orchestrator | Saturday 11 April 2026 00:48:15 +0000 (0:00:00.674) 0:02:38.411 ******** 2026-04-11 00:56:32.699703 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699709 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699716 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699722 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699728 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699735 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699742 | orchestrator | 2026-04-11 00:56:32.699748 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-11 00:56:32.699756 | orchestrator | Saturday 11 April 2026 00:48:16 +0000 (0:00:00.541) 0:02:38.952 ******** 2026-04-11 00:56:32.699766 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.699772 | 
orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.699778 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.699785 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.699791 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.699797 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.699803 | orchestrator | 2026-04-11 00:56:32.699810 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-11 00:56:32.699816 | orchestrator | Saturday 11 April 2026 00:48:17 +0000 (0:00:00.652) 0:02:39.605 ******** 2026-04-11 00:56:32.699822 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.699827 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.699834 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.699840 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.699855 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.699861 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.699866 | orchestrator | 2026-04-11 00:56:32.699873 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-11 00:56:32.699879 | orchestrator | Saturday 11 April 2026 00:48:18 +0000 (0:00:01.019) 0:02:40.625 ******** 2026-04-11 00:56:32.699885 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-0, testbed-node-5, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.699891 | orchestrator | 2026-04-11 00:56:32.699897 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-11 00:56:32.699903 | orchestrator | Saturday 11 April 2026 00:48:18 +0000 (0:00:00.888) 0:02:41.514 ******** 2026-04-11 00:56:32.699910 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-04-11 00:56:32.699917 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-04-11 00:56:32.699924 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-04-11 00:56:32.699930 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-04-11 00:56:32.699936 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-11 00:56:32.699943 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-11 00:56:32.699949 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-04-11 00:56:32.699956 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-11 00:56:32.699964 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-11 00:56:32.699971 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-04-11 00:56:32.699978 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-11 00:56:32.699984 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-11 00:56:32.699990 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-11 00:56:32.699994 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-11 00:56:32.699998 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-11 00:56:32.700002 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-11 00:56:32.700005 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-11 00:56:32.700009 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-11 00:56:32.700035 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-04-11 00:56:32.700040 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-11 00:56:32.700044 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-11 00:56:32.700047 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-11 00:56:32.700051 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-11 
00:56:32.700055 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-11 00:56:32.700059 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-11 00:56:32.700066 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-11 00:56:32.700070 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-11 00:56:32.700074 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-11 00:56:32.700078 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-11 00:56:32.700082 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-11 00:56:32.700085 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-11 00:56:32.700089 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-11 00:56:32.700093 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-11 00:56:32.700097 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-11 00:56:32.700100 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-11 00:56:32.700104 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-11 00:56:32.700112 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 00:56:32.700116 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-11 00:56:32.700120 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-11 00:56:32.700124 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-11 00:56:32.700127 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-11 00:56:32.700131 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 00:56:32.700135 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-11 
00:56:32.700138 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-11 00:56:32.700142 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 00:56:32.700146 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-11 00:56:32.700150 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 00:56:32.700153 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-11 00:56:32.700157 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-11 00:56:32.700161 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 00:56:32.700164 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-04-11 00:56:32.700168 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 00:56:32.700172 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 00:56:32.700175 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-11 00:56:32.700179 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 00:56:32.700183 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-11 00:56:32.700186 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 00:56:32.700190 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 00:56:32.700194 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 00:56:32.700198 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 00:56:32.700201 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 00:56:32.700205 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 
2026-04-11 00:56:32.700209 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 00:56:32.700212 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-04-11 00:56:32.700216 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 00:56:32.700220 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 00:56:32.700224 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 00:56:32.700228 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 00:56:32.700231 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 00:56:32.700235 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-04-11 00:56:32.700239 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 00:56:32.700242 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 00:56:32.700246 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 00:56:32.700250 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 00:56:32.700254 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 00:56:32.700257 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 00:56:32.700264 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 00:56:32.700281 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 00:56:32.700286 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 00:56:32.700289 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 00:56:32.700293 | orchestrator | changed: [testbed-node-3] => 
(item=/var/run/ceph) 2026-04-11 00:56:32.700297 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 00:56:32.700303 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 00:56:32.700307 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 00:56:32.700310 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 00:56:32.700314 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 00:56:32.700318 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-04-11 00:56:32.700322 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-04-11 00:56:32.700326 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-04-11 00:56:32.700329 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 00:56:32.700333 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-04-11 00:56:32.700337 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-04-11 00:56:32.700341 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-04-11 00:56:32.700344 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-04-11 00:56:32.700348 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-04-11 00:56:32.700352 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-04-11 00:56:32.700355 | orchestrator | 2026-04-11 00:56:32.700359 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-11 00:56:32.700363 | orchestrator | Saturday 11 April 2026 00:48:25 +0000 (0:00:06.753) 0:02:48.267 ******** 2026-04-11 00:56:32.700367 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.700370 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.700374 | 
orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.700378 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:56:32.700382 | orchestrator | 2026-04-11 00:56:32.700386 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-11 00:56:32.700390 | orchestrator | Saturday 11 April 2026 00:48:26 +0000 (0:00:00.821) 0:02:49.089 ******** 2026-04-11 00:56:32.700393 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 00:56:32.700398 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 00:56:32.700401 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-11 00:56:32.700405 | orchestrator | 2026-04-11 00:56:32.700409 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-11 00:56:32.700413 | orchestrator | Saturday 11 April 2026 00:48:27 +0000 (0:00:00.644) 0:02:49.733 ******** 2026-04-11 00:56:32.700417 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 00:56:32.700420 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 00:56:32.700424 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-11 00:56:32.700431 | orchestrator | 2026-04-11 00:56:32.700435 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 
2026-04-11 00:56:32.700439 | orchestrator | Saturday 11 April 2026 00:48:28 +0000 (0:00:01.194) 0:02:50.928 ******** 2026-04-11 00:56:32.700443 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.700447 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.700451 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.700454 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.700458 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.700462 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.700466 | orchestrator | 2026-04-11 00:56:32.700469 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-11 00:56:32.700473 | orchestrator | Saturday 11 April 2026 00:48:29 +0000 (0:00:00.759) 0:02:51.688 ******** 2026-04-11 00:56:32.700477 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.700481 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.700515 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.700520 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.700524 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.700528 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.700531 | orchestrator | 2026-04-11 00:56:32.700535 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-11 00:56:32.700539 | orchestrator | Saturday 11 April 2026 00:48:29 +0000 (0:00:00.611) 0:02:52.299 ******** 2026-04-11 00:56:32.700543 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.700547 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.700550 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.700554 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.700558 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.700562 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.700566 | orchestrator | 2026-04-11 
00:56:32.700584 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-11 00:56:32.700589 | orchestrator | Saturday 11 April 2026 00:48:30 +0000 (0:00:00.816) 0:02:53.116 ********
2026-04-11 00:56:32.700593 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.700596 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.700600 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.700604 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.700608 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.700612 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.700616 | orchestrator |
2026-04-11 00:56:32.700622 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-11 00:56:32.700626 | orchestrator | Saturday 11 April 2026 00:48:31 +0000 (0:00:00.642) 0:02:53.758 ********
2026-04-11 00:56:32.700630 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.700633 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.700637 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.700641 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.700644 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.700648 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.700652 | orchestrator |
2026-04-11 00:56:32.700656 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-11 00:56:32.700660 | orchestrator | Saturday 11 April 2026 00:48:31 +0000 (0:00:00.733) 0:02:54.491 ********
2026-04-11 00:56:32.700664 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.700668 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.700672 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.700675 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.700679 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.700683 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.700687 | orchestrator |
2026-04-11 00:56:32.700690 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-11 00:56:32.700697 | orchestrator | Saturday 11 April 2026 00:48:32 +0000 (0:00:00.520) 0:02:55.012 ********
2026-04-11 00:56:32.700701 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.700705 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.700709 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.700712 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.700716 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.700720 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.700724 | orchestrator |
2026-04-11 00:56:32.700728 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-11 00:56:32.700731 | orchestrator | Saturday 11 April 2026 00:48:33 +0000 (0:00:00.756) 0:02:55.769 ********
2026-04-11 00:56:32.700735 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.700739 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.700743 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.700746 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.700750 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.700754 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.700758 | orchestrator |
2026-04-11 00:56:32.700761 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-11 00:56:32.700765 | orchestrator | Saturday 11 April 2026 00:48:33 +0000 (0:00:00.536) 0:02:56.306 ********
2026-04-11 00:56:32.700769 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.700773 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.700776 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.700780 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.700784 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.700788 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.700792 | orchestrator |
2026-04-11 00:56:32.700795 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-11 00:56:32.700799 | orchestrator | Saturday 11 April 2026 00:48:36 +0000 (0:00:02.730) 0:02:59.036 ********
2026-04-11 00:56:32.700803 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.700807 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.700810 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.700814 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.700818 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.700822 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.700826 | orchestrator |
2026-04-11 00:56:32.700829 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-11 00:56:32.700833 | orchestrator | Saturday 11 April 2026 00:48:37 +0000 (0:00:00.501) 0:02:59.538 ********
2026-04-11 00:56:32.700837 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.700841 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.700844 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.700848 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.700852 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.700856 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.700860 | orchestrator |
2026-04-11 00:56:32.700863 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-11 00:56:32.700867 | orchestrator | Saturday 11 April 2026 00:48:37 +0000 (0:00:00.619) 0:03:00.157 ********
2026-04-11 00:56:32.700871 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.700875 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.700878 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.700882 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.700886 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.700890 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.700894 | orchestrator |
2026-04-11 00:56:32.700897 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-11 00:56:32.700901 | orchestrator | Saturday 11 April 2026 00:48:38 +0000 (0:00:00.518) 0:03:00.676 ********
2026-04-11 00:56:32.700905 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-11 00:56:32.700911 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-11 00:56:32.700915 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 00:56:32.700919 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.700936 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.700940 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.700944 | orchestrator |
2026-04-11 00:56:32.700948 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-11 00:56:32.700952 | orchestrator | Saturday 11 April 2026 00:48:39 +0000 (0:00:00.887) 0:03:01.563 ********
2026-04-11 00:56:32.700958 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-11 00:56:32.700964 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-11 00:56:32.700968 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-11 00:56:32.700972 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-11 00:56:32.700976 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.700980 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.700984 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-11 00:56:32.700988 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.700995 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-11 00:56:32.701001 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.701008 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.701014 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.701021 | orchestrator |
2026-04-11 00:56:32.701028 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-11 00:56:32.701035 | orchestrator | Saturday 11 April 2026 00:48:39 +0000 (0:00:00.772) 0:03:02.335 ********
2026-04-11 00:56:32.701042 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701047 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.701050 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.701054 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.701058 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.701062 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.701065 | orchestrator |
2026-04-11 00:56:32.701069 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-11 00:56:32.701076 | orchestrator | Saturday 11 April 2026 00:48:40 +0000 (0:00:00.758) 0:03:03.094 ********
2026-04-11 00:56:32.701080 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701084 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.701088 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.701091 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.701095 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.701099 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.701102 | orchestrator |
2026-04-11 00:56:32.701106 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 00:56:32.701110 | orchestrator | Saturday 11 April 2026 00:48:41 +0000 (0:00:00.967) 0:03:04.062 ********
2026-04-11 00:56:32.701114 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701117 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.701121 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.701125 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.701129 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.701132 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.701136 | orchestrator |
2026-04-11 00:56:32.701140 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 00:56:32.701144 | orchestrator | Saturday 11 April 2026 00:48:42 +0000 (0:00:00.488) 0:03:04.550 ********
2026-04-11 00:56:32.701148 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.701151 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701155 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.701159 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.701162 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.701166 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.701170 | orchestrator |
2026-04-11 00:56:32.701174 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 00:56:32.701191 | orchestrator | Saturday 11 April 2026 00:48:42 +0000 (0:00:00.655) 0:03:05.205 ********
2026-04-11 00:56:32.701195 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701199 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.701203 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.701207 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.701210 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.701214 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.701218 | orchestrator |
2026-04-11 00:56:32.701222 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 00:56:32.701228 | orchestrator | Saturday 11 April 2026 00:48:43 +0000 (0:00:00.454) 0:03:05.659 ********
2026-04-11 00:56:32.701232 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.701236 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.701239 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.701243 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.701247 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.701251 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.701255 | orchestrator |
2026-04-11 00:56:32.701262 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 00:56:32.701270 | orchestrator | Saturday 11 April 2026 00:48:43 +0000 (0:00:00.681) 0:03:06.341 ********
2026-04-11 00:56:32.701277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 00:56:32.701284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 00:56:32.701289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 00:56:32.701292 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701296 | orchestrator |
2026-04-11 00:56:32.701300 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 00:56:32.701304 | orchestrator | Saturday 11 April 2026 00:48:44 +0000 (0:00:00.455) 0:03:06.796 ********
2026-04-11 00:56:32.701307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 00:56:32.701314 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 00:56:32.701318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 00:56:32.701322 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701325 | orchestrator |
2026-04-11 00:56:32.701329 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 00:56:32.701333 | orchestrator | Saturday 11 April 2026 00:48:44 +0000 (0:00:00.461) 0:03:07.257 ********
2026-04-11 00:56:32.701337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 00:56:32.701340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 00:56:32.701344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 00:56:32.701348 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701352 | orchestrator |
2026-04-11 00:56:32.701355 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 00:56:32.701359 | orchestrator | Saturday 11 April 2026 00:48:45 +0000 (0:00:00.357) 0:03:07.616 ********
2026-04-11 00:56:32.701363 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.701367 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.701371 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.701375 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.701378 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.701382 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.701386 | orchestrator |
2026-04-11 00:56:32.701390 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 00:56:32.701393 | orchestrator | Saturday 11 April 2026 00:48:45 +0000 (0:00:00.811) 0:03:08.427 ********
2026-04-11 00:56:32.701397 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-11 00:56:32.701401 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-11 00:56:32.701404 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-11 00:56:32.701408 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-11 00:56:32.701412 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-11 00:56:32.701416 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.701419 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.701423 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-11 00:56:32.701427 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.701431 | orchestrator |
2026-04-11 00:56:32.701434 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-11 00:56:32.701438 | orchestrator | Saturday 11 April 2026 00:48:48 +0000 (0:00:02.105) 0:03:10.532 ********
2026-04-11 00:56:32.701442 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.701446 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.701449 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.701453 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.701457 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.701460 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.701464 | orchestrator |
2026-04-11 00:56:32.701468 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-11 00:56:32.701472 | orchestrator | Saturday 11 April 2026 00:48:51 +0000 (0:00:03.154) 0:03:13.686 ********
2026-04-11 00:56:32.701476 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.701479 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.701483 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.701497 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.701501 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.701505 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.701509 | orchestrator |
2026-04-11 00:56:32.701513 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-11 00:56:32.701517 | orchestrator | Saturday 11 April 2026 00:48:52 +0000 (0:00:01.354) 0:03:15.040 ********
2026-04-11 00:56:32.701520 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701524 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.701530 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.701534 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:32.701538 | orchestrator |
2026-04-11 00:56:32.701542 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-11 00:56:32.701561 | orchestrator | Saturday 11 April 2026 00:48:53 +0000 (0:00:00.799) 0:03:15.839 ********
2026-04-11 00:56:32.701565 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.701569 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.701573 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.701577 | orchestrator |
2026-04-11 00:56:32.701580 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-11 00:56:32.701584 | orchestrator | Saturday 11 April 2026 00:48:53 +0000 (0:00:00.520) 0:03:16.360 ********
2026-04-11 00:56:32.701588 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.701592 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.701595 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.701599 | orchestrator |
2026-04-11 00:56:32.701605 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-11 00:56:32.701609 | orchestrator | Saturday 11 April 2026 00:48:55 +0000 (0:00:01.289) 0:03:17.650 ********
2026-04-11 00:56:32.701613 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 00:56:32.701617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 00:56:32.701620 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 00:56:32.701624 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.701628 | orchestrator |
2026-04-11 00:56:32.701632 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-11 00:56:32.701636 | orchestrator | Saturday 11 April 2026 00:48:55 +0000 (0:00:00.641) 0:03:18.292 ********
2026-04-11 00:56:32.701640 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.701644 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.701647 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.701651 | orchestrator |
2026-04-11 00:56:32.701655 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-11 00:56:32.701659 | orchestrator | Saturday 11 April 2026 00:48:56 +0000 (0:00:00.330) 0:03:18.622 ********
2026-04-11 00:56:32.701662 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.701666 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.701670 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.701674 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:56:32.701677 | orchestrator |
2026-04-11 00:56:32.701681 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-11 00:56:32.701685 | orchestrator | Saturday 11 April 2026 00:48:57 +0000 (0:00:01.169) 0:03:19.792 ********
2026-04-11 00:56:32.701689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 00:56:32.701692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 00:56:32.701696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 00:56:32.701700 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701703 | orchestrator |
2026-04-11 00:56:32.701707 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-11 00:56:32.701711 | orchestrator | Saturday 11 April 2026 00:48:57 +0000 (0:00:00.427) 0:03:20.219 ********
2026-04-11 00:56:32.701715 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701719 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.701722 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.701726 | orchestrator |
2026-04-11 00:56:32.701730 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-11 00:56:32.701734 | orchestrator | Saturday 11 April 2026 00:48:58 +0000 (0:00:00.398) 0:03:20.617 ********
2026-04-11 00:56:32.701737 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701746 | orchestrator |
2026-04-11 00:56:32.701749 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-11 00:56:32.701753 | orchestrator | Saturday 11 April 2026 00:48:58 +0000 (0:00:00.264) 0:03:20.882 ********
2026-04-11 00:56:32.701757 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701761 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.701764 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.701768 | orchestrator |
2026-04-11 00:56:32.701772 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-11 00:56:32.701776 | orchestrator | Saturday 11 April 2026 00:48:58 +0000 (0:00:00.325) 0:03:21.207 ********
2026-04-11 00:56:32.701779 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701783 | orchestrator |
2026-04-11 00:56:32.701787 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-11 00:56:32.701791 | orchestrator | Saturday 11 April 2026 00:48:59 +0000 (0:00:00.699) 0:03:21.907 ********
2026-04-11 00:56:32.701794 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701798 | orchestrator |
2026-04-11 00:56:32.701802 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-11 00:56:32.701806 | orchestrator | Saturday 11 April 2026 00:48:59 +0000 (0:00:00.233) 0:03:22.140 ********
2026-04-11 00:56:32.701809 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701813 | orchestrator |
2026-04-11 00:56:32.701817 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-11 00:56:32.701820 | orchestrator | Saturday 11 April 2026 00:48:59 +0000 (0:00:00.129) 0:03:22.269 ********
2026-04-11 00:56:32.701824 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701828 | orchestrator |
2026-04-11 00:56:32.701832 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-11 00:56:32.701835 | orchestrator | Saturday 11 April 2026 00:48:59 +0000 (0:00:00.243) 0:03:22.513 ********
2026-04-11 00:56:32.701839 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701843 | orchestrator |
2026-04-11 00:56:32.701850 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-11 00:56:32.701858 | orchestrator | Saturday 11 April 2026 00:49:00 +0000 (0:00:00.257) 0:03:22.770 ********
2026-04-11 00:56:32.701866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 00:56:32.701873 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 00:56:32.701879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 00:56:32.701885 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701891 | orchestrator |
2026-04-11 00:56:32.701897 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-11 00:56:32.701926 | orchestrator | Saturday 11 April 2026 00:49:00 +0000 (0:00:00.434) 0:03:23.204 ********
2026-04-11 00:56:32.701933 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701939 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.701946 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.701951 | orchestrator |
2026-04-11 00:56:32.701955 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-11 00:56:32.701959 | orchestrator | Saturday 11 April 2026 00:49:00 +0000 (0:00:00.323) 0:03:23.528 ********
2026-04-11 00:56:32.701963 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701966 | orchestrator |
2026-04-11 00:56:32.701973 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-11 00:56:32.701977 | orchestrator | Saturday 11 April 2026 00:49:01 +0000 (0:00:00.240) 0:03:23.768 ********
2026-04-11 00:56:32.701981 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.701984 | orchestrator |
2026-04-11 00:56:32.701988 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-11 00:56:32.701992 | orchestrator | Saturday 11 April 2026 00:49:01 +0000 (0:00:00.196) 0:03:23.965 ********
2026-04-11 00:56:32.701996 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.701999 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.702007 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.702011 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:56:32.702032 | orchestrator |
2026-04-11 00:56:32.702036 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-11 00:56:32.702039 | orchestrator | Saturday 11 April 2026 00:49:02 +0000 (0:00:01.053) 0:03:25.019 ********
2026-04-11 00:56:32.702043 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.702047 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.702051 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.702055 | orchestrator |
2026-04-11 00:56:32.702058 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-11 00:56:32.702062 | orchestrator | Saturday 11 April 2026 00:49:02 +0000 (0:00:00.314) 0:03:25.333 ********
2026-04-11 00:56:32.702066 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.702070 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.702073 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.702077 | orchestrator |
2026-04-11 00:56:32.702081 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-11 00:56:32.702085 | orchestrator | Saturday 11 April 2026 00:49:04 +0000 (0:00:01.372) 0:03:26.705 ********
2026-04-11 00:56:32.702088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 00:56:32.702092 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 00:56:32.702096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 00:56:32.702100 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.702103 | orchestrator |
2026-04-11 00:56:32.702107 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-11 00:56:32.702111 | orchestrator | Saturday 11 April 2026 00:49:04 +0000 (0:00:00.656) 0:03:27.362 ********
2026-04-11 00:56:32.702115 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.702119 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.702122 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.702126 | orchestrator |
2026-04-11 00:56:32.702130 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-11 00:56:32.702134 | orchestrator | Saturday 11 April 2026 00:49:05 +0000 (0:00:00.351) 0:03:27.714 ********
2026-04-11 00:56:32.702137 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.702143 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.702149 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.702154 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:56:32.702160 | orchestrator |
2026-04-11 00:56:32.702165 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-11 00:56:32.702172 | orchestrator | Saturday 11 April 2026 00:49:06 +0000 (0:00:01.005) 0:03:28.719 ********
2026-04-11 00:56:32.702178 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.702181 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.702185 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.702189 | orchestrator |
2026-04-11 00:56:32.702193 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-11 00:56:32.702197 | orchestrator | Saturday 11 April 2026 00:49:06 +0000 (0:00:00.328) 0:03:29.047 ********
2026-04-11 00:56:32.702200 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.702204 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.702208 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.702211 | orchestrator |
2026-04-11 00:56:32.702215 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-11 00:56:32.702219 | orchestrator | Saturday 11 April 2026 00:49:07 +0000 (0:00:01.250) 0:03:30.297 ********
2026-04-11 00:56:32.702223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 00:56:32.702226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 00:56:32.702230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 00:56:32.702237 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.702241 | orchestrator |
2026-04-11 00:56:32.702245 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-11 00:56:32.702248 | orchestrator | Saturday 11 April 2026 00:49:08 +0000 (0:00:00.784) 0:03:31.082 ********
2026-04-11 00:56:32.702252 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.702256 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.702260 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.702263 | orchestrator |
2026-04-11 00:56:32.702267 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-11 00:56:32.702271 | orchestrator | Saturday 11 April 2026 00:49:08 +0000 (0:00:00.414) 0:03:31.496 ********
2026-04-11 00:56:32.702278 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.702285 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.702292 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.702299 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.702306 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.702334 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.702338 | orchestrator |
2026-04-11 00:56:32.702342 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-11 00:56:32.702346 | orchestrator | Saturday 11 April 2026 00:49:09 +0000 (0:00:00.513) 0:03:32.010 ********
2026-04-11 00:56:32.702350 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.702354 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.702357 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.702363 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:32.702367 | orchestrator |
2026-04-11 00:56:32.702371 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-11 00:56:32.702375 | orchestrator | Saturday 11 April 2026 00:49:10 +0000 (0:00:00.868) 0:03:32.878 ********
2026-04-11 00:56:32.702379 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.702383 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.702386 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.702390 | orchestrator |
2026-04-11 00:56:32.702394 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-11 00:56:32.702398 | orchestrator | Saturday 11 April 2026 00:49:10 +0000 (0:00:00.293) 0:03:33.171 ********
2026-04-11 00:56:32.702402 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.702405 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.702409 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.702413 | orchestrator |
2026-04-11 00:56:32.702416 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-11 00:56:32.702420 | orchestrator | Saturday 11 April 2026 00:49:11 +0000 (0:00:01.072) 0:03:34.244 ********
2026-04-11 00:56:32.702424 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 00:56:32.702428 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 00:56:32.702432 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 00:56:32.702435 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.702442 | orchestrator |
2026-04-11 00:56:32.702450 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-11 00:56:32.702457 | orchestrator | Saturday 11 April 2026 00:49:12 +0000 (0:00:00.739) 0:03:34.983 ********
2026-04-11 00:56:32.702464 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.702468 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.702472 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.702476 | orchestrator |
2026-04-11 00:56:32.702480 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-11 00:56:32.702494 | orchestrator |
2026-04-11 00:56:32.702498 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-11 00:56:32.702502 | orchestrator | Saturday 11 April 2026 00:49:13 +0000 (0:00:00.684) 0:03:35.668 ********
2026-04-11 00:56:32.702511 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:32.702515 | orchestrator |
2026-04-11 00:56:32.702519 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-11 00:56:32.702522 | orchestrator | Saturday 11 April 2026 00:49:13 +0000 (0:00:00.456) 0:03:36.124 ********
2026-04-11 00:56:32.702526 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:32.702530 | orchestrator |
2026-04-11 00:56:32.702534 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-11 00:56:32.702538 | orchestrator | Saturday 11 April 2026 00:49:14 +0000 (0:00:00.582) 0:03:36.707 ********
2026-04-11 00:56:32.702541 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.702545 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.702549 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.702552 | orchestrator |
2026-04-11 00:56:32.702556 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-11 00:56:32.702560 | orchestrator | Saturday 11 April 2026 00:49:14 +0000 (0:00:00.618) 0:03:37.326 ********
2026-04-11 00:56:32.702564 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.702567 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.702571 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.702575 | orchestrator |
2026-04-11 00:56:32.702579 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-11 00:56:32.702583 | orchestrator | Saturday 11 April 2026 00:49:15 +0000 (0:00:00.306) 0:03:37.633 ********
2026-04-11 00:56:32.702586 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.702590 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.702594 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.702598 | orchestrator |
2026-04-11 00:56:32.702601 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-11 00:56:32.702605 | orchestrator | Saturday 11 April 2026 00:49:15 +0000 (0:00:00.285) 0:03:37.918 ********
2026-04-11 00:56:32.702609 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.702613 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.702616 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.702620 | orchestrator |
2026-04-11 00:56:32.702624 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-11 00:56:32.702627 | orchestrator | Saturday 11 April 2026 00:49:15 +0000 (0:00:00.435) 0:03:38.353 ********
2026-04-11 00:56:32.702631 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.702635 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.702639 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.702642 | orchestrator |
2026-04-11 00:56:32.702646 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-11 00:56:32.702650 | orchestrator | Saturday 11 April 2026 00:49:16 +0000 (0:00:00.671) 0:03:39.024 ********
2026-04-11 00:56:32.702654 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.702657 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.702661 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.702665 | orchestrator |
2026-04-11 00:56:32.702669 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-11 00:56:32.702673 | orchestrator |
Saturday 11 April 2026 00:49:16 +0000 (0:00:00.275) 0:03:39.299 ******** 2026-04-11 00:56:32.702692 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.702696 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.702700 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.702704 | orchestrator | 2026-04-11 00:56:32.702707 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 00:56:32.702711 | orchestrator | Saturday 11 April 2026 00:49:17 +0000 (0:00:00.261) 0:03:39.561 ******** 2026-04-11 00:56:32.702715 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.702719 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.702722 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.702729 | orchestrator | 2026-04-11 00:56:32.702735 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 00:56:32.702739 | orchestrator | Saturday 11 April 2026 00:49:17 +0000 (0:00:00.917) 0:03:40.478 ******** 2026-04-11 00:56:32.702743 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.702747 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.702750 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.702754 | orchestrator | 2026-04-11 00:56:32.702758 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 00:56:32.702762 | orchestrator | Saturday 11 April 2026 00:49:18 +0000 (0:00:00.667) 0:03:41.146 ******** 2026-04-11 00:56:32.702765 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.702769 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.702773 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.702777 | orchestrator | 2026-04-11 00:56:32.702780 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 00:56:32.702784 | orchestrator | Saturday 11 April 2026 00:49:18 +0000 
(0:00:00.308) 0:03:41.455 ******** 2026-04-11 00:56:32.702788 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.702791 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.702795 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.702799 | orchestrator | 2026-04-11 00:56:32.702803 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 00:56:32.702806 | orchestrator | Saturday 11 April 2026 00:49:19 +0000 (0:00:00.268) 0:03:41.723 ******** 2026-04-11 00:56:32.702810 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.702814 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.702818 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.702821 | orchestrator | 2026-04-11 00:56:32.702825 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 00:56:32.702829 | orchestrator | Saturday 11 April 2026 00:49:19 +0000 (0:00:00.266) 0:03:41.989 ******** 2026-04-11 00:56:32.702832 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.702836 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.702840 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.702844 | orchestrator | 2026-04-11 00:56:32.702847 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 00:56:32.702851 | orchestrator | Saturday 11 April 2026 00:49:19 +0000 (0:00:00.457) 0:03:42.446 ******** 2026-04-11 00:56:32.702855 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.702859 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.702862 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.702866 | orchestrator | 2026-04-11 00:56:32.702870 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 00:56:32.702873 | orchestrator | Saturday 11 April 2026 00:49:20 +0000 (0:00:00.266) 
0:03:42.713 ******** 2026-04-11 00:56:32.702877 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.702881 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.702885 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.702888 | orchestrator | 2026-04-11 00:56:32.702892 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 00:56:32.702896 | orchestrator | Saturday 11 April 2026 00:49:20 +0000 (0:00:00.289) 0:03:43.003 ******** 2026-04-11 00:56:32.702900 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.702903 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.702907 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.702911 | orchestrator | 2026-04-11 00:56:32.702914 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 00:56:32.702918 | orchestrator | Saturday 11 April 2026 00:49:20 +0000 (0:00:00.300) 0:03:43.303 ******** 2026-04-11 00:56:32.702922 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.702926 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.702929 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.702933 | orchestrator | 2026-04-11 00:56:32.702937 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 00:56:32.702943 | orchestrator | Saturday 11 April 2026 00:49:21 +0000 (0:00:00.459) 0:03:43.763 ******** 2026-04-11 00:56:32.702947 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.702951 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.702955 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.702958 | orchestrator | 2026-04-11 00:56:32.702962 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 00:56:32.702966 | orchestrator | Saturday 11 April 2026 00:49:21 +0000 (0:00:00.305) 0:03:44.068 ******** 2026-04-11 
00:56:32.702970 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.702973 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.702977 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.702981 | orchestrator | 2026-04-11 00:56:32.702984 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-11 00:56:32.702988 | orchestrator | Saturday 11 April 2026 00:49:22 +0000 (0:00:00.493) 0:03:44.562 ******** 2026-04-11 00:56:32.702992 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.702996 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.703000 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.703003 | orchestrator | 2026-04-11 00:56:32.703007 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-11 00:56:32.703011 | orchestrator | Saturday 11 April 2026 00:49:22 +0000 (0:00:00.284) 0:03:44.847 ******** 2026-04-11 00:56:32.703015 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.703018 | orchestrator | 2026-04-11 00:56:32.703022 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-11 00:56:32.703026 | orchestrator | Saturday 11 April 2026 00:49:22 +0000 (0:00:00.674) 0:03:45.522 ******** 2026-04-11 00:56:32.703030 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.703033 | orchestrator | 2026-04-11 00:56:32.703055 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-11 00:56:32.703062 | orchestrator | Saturday 11 April 2026 00:49:23 +0000 (0:00:00.142) 0:03:45.664 ******** 2026-04-11 00:56:32.703072 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-11 00:56:32.703078 | orchestrator | 2026-04-11 00:56:32.703084 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] 
**************************** 2026-04-11 00:56:32.703090 | orchestrator | Saturday 11 April 2026 00:49:24 +0000 (0:00:01.034) 0:03:46.699 ******** 2026-04-11 00:56:32.703095 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.703101 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.703110 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.703117 | orchestrator | 2026-04-11 00:56:32.703122 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-11 00:56:32.703129 | orchestrator | Saturday 11 April 2026 00:49:24 +0000 (0:00:00.335) 0:03:47.035 ******** 2026-04-11 00:56:32.703135 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.703141 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.703148 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.703154 | orchestrator | 2026-04-11 00:56:32.703161 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-11 00:56:32.703167 | orchestrator | Saturday 11 April 2026 00:49:24 +0000 (0:00:00.487) 0:03:47.523 ******** 2026-04-11 00:56:32.703172 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.703176 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:56:32.703180 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:56:32.703184 | orchestrator | 2026-04-11 00:56:32.703188 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-11 00:56:32.703191 | orchestrator | Saturday 11 April 2026 00:49:26 +0000 (0:00:01.134) 0:03:48.657 ******** 2026-04-11 00:56:32.703195 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.703199 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:56:32.703203 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:56:32.703206 | orchestrator | 2026-04-11 00:56:32.703215 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-11 
00:56:32.703222 | orchestrator | Saturday 11 April 2026 00:49:26 +0000 (0:00:00.774) 0:03:49.432 ******** 2026-04-11 00:56:32.703227 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.703233 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:56:32.703239 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:56:32.703245 | orchestrator | 2026-04-11 00:56:32.703251 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-11 00:56:32.703257 | orchestrator | Saturday 11 April 2026 00:49:27 +0000 (0:00:00.633) 0:03:50.066 ******** 2026-04-11 00:56:32.703264 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.703270 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.703277 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.703281 | orchestrator | 2026-04-11 00:56:32.703285 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-11 00:56:32.703288 | orchestrator | Saturday 11 April 2026 00:49:28 +0000 (0:00:00.922) 0:03:50.988 ******** 2026-04-11 00:56:32.703292 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.703296 | orchestrator | 2026-04-11 00:56:32.703300 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-11 00:56:32.703303 | orchestrator | Saturday 11 April 2026 00:49:29 +0000 (0:00:01.306) 0:03:52.294 ******** 2026-04-11 00:56:32.703307 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.703311 | orchestrator | 2026-04-11 00:56:32.703315 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-11 00:56:32.703318 | orchestrator | Saturday 11 April 2026 00:49:30 +0000 (0:00:00.717) 0:03:53.011 ******** 2026-04-11 00:56:32.703322 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-11 00:56:32.703326 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 
00:56:32.703330 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:56:32.703333 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-11 00:56:32.703337 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-04-11 00:56:32.703341 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-11 00:56:32.703345 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-11 00:56:32.703348 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-04-11 00:56:32.703352 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-11 00:56:32.703356 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-04-11 00:56:32.703359 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-11 00:56:32.703363 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-11 00:56:32.703367 | orchestrator | 2026-04-11 00:56:32.703371 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-11 00:56:32.703374 | orchestrator | Saturday 11 April 2026 00:49:33 +0000 (0:00:02.776) 0:03:55.788 ******** 2026-04-11 00:56:32.703378 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.703382 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:56:32.703385 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:56:32.703389 | orchestrator | 2026-04-11 00:56:32.703393 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-11 00:56:32.703397 | orchestrator | Saturday 11 April 2026 00:49:34 +0000 (0:00:00.978) 0:03:56.767 ******** 2026-04-11 00:56:32.703400 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.703404 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.703408 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.703412 | orchestrator | 2026-04-11 
00:56:32.703416 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-11 00:56:32.703419 | orchestrator | Saturday 11 April 2026 00:49:34 +0000 (0:00:00.401) 0:03:57.168 ******** 2026-04-11 00:56:32.703423 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.703427 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.703434 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.703438 | orchestrator | 2026-04-11 00:56:32.703442 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-11 00:56:32.703446 | orchestrator | Saturday 11 April 2026 00:49:35 +0000 (0:00:00.686) 0:03:57.855 ******** 2026-04-11 00:56:32.703449 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.703472 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:56:32.703477 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:56:32.703480 | orchestrator | 2026-04-11 00:56:32.703496 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-11 00:56:32.703501 | orchestrator | Saturday 11 April 2026 00:49:37 +0000 (0:00:01.677) 0:03:59.532 ******** 2026-04-11 00:56:32.703505 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.703509 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:56:32.703513 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:56:32.703516 | orchestrator | 2026-04-11 00:56:32.703523 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-11 00:56:32.703527 | orchestrator | Saturday 11 April 2026 00:49:38 +0000 (0:00:01.076) 0:04:00.609 ******** 2026-04-11 00:56:32.703531 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.703534 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.703538 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.703542 | orchestrator | 2026-04-11 00:56:32.703546 | 
orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-11 00:56:32.703549 | orchestrator | Saturday 11 April 2026 00:49:38 +0000 (0:00:00.272) 0:04:00.881 ******** 2026-04-11 00:56:32.703553 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.703557 | orchestrator | 2026-04-11 00:56:32.703560 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-11 00:56:32.703564 | orchestrator | Saturday 11 April 2026 00:49:39 +0000 (0:00:00.832) 0:04:01.714 ******** 2026-04-11 00:56:32.703568 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.703572 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.703575 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.703579 | orchestrator | 2026-04-11 00:56:32.703583 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-11 00:56:32.703587 | orchestrator | Saturday 11 April 2026 00:49:39 +0000 (0:00:00.382) 0:04:02.096 ******** 2026-04-11 00:56:32.703590 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.703594 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.703598 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.703601 | orchestrator | 2026-04-11 00:56:32.703605 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-11 00:56:32.703609 | orchestrator | Saturday 11 April 2026 00:49:39 +0000 (0:00:00.317) 0:04:02.414 ******** 2026-04-11 00:56:32.703613 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.703617 | orchestrator | 2026-04-11 00:56:32.703620 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-11 00:56:32.703624 | 
orchestrator | Saturday 11 April 2026 00:49:40 +0000 (0:00:00.778) 0:04:03.193 ******** 2026-04-11 00:56:32.703628 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.703631 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:56:32.703635 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:56:32.703639 | orchestrator | 2026-04-11 00:56:32.703643 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-11 00:56:32.703647 | orchestrator | Saturday 11 April 2026 00:49:42 +0000 (0:00:01.534) 0:04:04.728 ******** 2026-04-11 00:56:32.703650 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.703654 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:56:32.703658 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:56:32.703661 | orchestrator | 2026-04-11 00:56:32.703665 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-11 00:56:32.703672 | orchestrator | Saturday 11 April 2026 00:49:43 +0000 (0:00:00.946) 0:04:05.674 ******** 2026-04-11 00:56:32.703676 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.703679 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:56:32.703683 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:56:32.703687 | orchestrator | 2026-04-11 00:56:32.703690 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-11 00:56:32.703694 | orchestrator | Saturday 11 April 2026 00:49:44 +0000 (0:00:01.698) 0:04:07.372 ******** 2026-04-11 00:56:32.703698 | orchestrator | changed: [testbed-node-0] 2026-04-11 00:56:32.703702 | orchestrator | changed: [testbed-node-1] 2026-04-11 00:56:32.703705 | orchestrator | changed: [testbed-node-2] 2026-04-11 00:56:32.703709 | orchestrator | 2026-04-11 00:56:32.703713 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-11 00:56:32.703716 | orchestrator | 
Saturday 11 April 2026 00:49:47 +0000 (0:00:02.317) 0:04:09.690 ******** 2026-04-11 00:56:32.703720 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.703724 | orchestrator | 2026-04-11 00:56:32.703728 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-11 00:56:32.703731 | orchestrator | Saturday 11 April 2026 00:49:47 +0000 (0:00:00.705) 0:04:10.396 ******** 2026-04-11 00:56:32.703735 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-04-11 00:56:32.703739 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.703742 | orchestrator | 2026-04-11 00:56:32.703746 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-11 00:56:32.703750 | orchestrator | Saturday 11 April 2026 00:50:10 +0000 (0:00:22.195) 0:04:32.592 ******** 2026-04-11 00:56:32.703754 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.703761 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.703768 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.703775 | orchestrator | 2026-04-11 00:56:32.703784 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-11 00:56:32.703791 | orchestrator | Saturday 11 April 2026 00:50:19 +0000 (0:00:09.185) 0:04:41.777 ******** 2026-04-11 00:56:32.703796 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.703803 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.703809 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.703815 | orchestrator | 2026-04-11 00:56:32.703822 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-11 00:56:32.703848 | orchestrator | Saturday 11 April 2026 00:50:19 +0000 (0:00:00.397) 0:04:42.175 ******** 2026-04-11 
00:56:32.703856 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2930cb3be1a576fe4dce5572c7a6ef3ad36b5722'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-11 00:56:32.703861 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2930cb3be1a576fe4dce5572c7a6ef3ad36b5722'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-11 00:56:32.703866 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2930cb3be1a576fe4dce5572c7a6ef3ad36b5722'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-11 00:56:32.703871 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2930cb3be1a576fe4dce5572c7a6ef3ad36b5722'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-11 00:56:32.703880 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__2930cb3be1a576fe4dce5572c7a6ef3ad36b5722'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-11 00:56:32.703885 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2930cb3be1a576fe4dce5572c7a6ef3ad36b5722'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2930cb3be1a576fe4dce5572c7a6ef3ad36b5722'}])  2026-04-11 00:56:32.703889 | orchestrator | 2026-04-11 00:56:32.703893 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-11 00:56:32.703897 | orchestrator | Saturday 11 April 2026 00:50:34 +0000 (0:00:14.780) 0:04:56.956 ******** 2026-04-11 00:56:32.703901 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.703904 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.703908 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.703912 | orchestrator | 2026-04-11 00:56:32.703916 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-11 00:56:32.703919 | orchestrator | Saturday 11 April 2026 00:50:34 +0000 (0:00:00.306) 0:04:57.263 ******** 2026-04-11 00:56:32.703923 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.703927 | orchestrator | 2026-04-11 00:56:32.703931 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-11 00:56:32.703934 | orchestrator | Saturday 11 April 2026 00:50:35 +0000 (0:00:00.766) 0:04:58.029 ******** 2026-04-11 00:56:32.703938 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.703942 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.703945 | orchestrator | ok: 
[testbed-node-2]
2026-04-11 00:56:32.703949 | orchestrator |
2026-04-11 00:56:32.703953 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-11 00:56:32.703957 | orchestrator | Saturday 11 April 2026 00:50:35 +0000 (0:00:00.307) 0:04:58.336 ********
2026-04-11 00:56:32.703960 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.703964 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.703968 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.703972 | orchestrator |
2026-04-11 00:56:32.703975 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-11 00:56:32.703979 | orchestrator | Saturday 11 April 2026 00:50:36 +0000 (0:00:00.317) 0:04:58.653 ********
2026-04-11 00:56:32.703983 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 00:56:32.703987 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 00:56:32.703990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 00:56:32.703994 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.703998 | orchestrator |
2026-04-11 00:56:32.704004 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-11 00:56:32.704010 | orchestrator | Saturday 11 April 2026 00:50:36 +0000 (0:00:00.804) 0:04:59.458 ********
2026-04-11 00:56:32.704019 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.704026 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.704051 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.704058 | orchestrator |
2026-04-11 00:56:32.704063 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-04-11 00:56:32.704074 | orchestrator |
2026-04-11 00:56:32.704080 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-11 00:56:32.704086 | orchestrator | Saturday 11 April 2026 00:50:37 +0000 (0:00:00.806) 0:05:00.264 ********
2026-04-11 00:56:32.704095 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:32.704102 | orchestrator |
2026-04-11 00:56:32.704108 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-11 00:56:32.704114 | orchestrator | Saturday 11 April 2026 00:50:38 +0000 (0:00:00.467) 0:05:00.732 ********
2026-04-11 00:56:32.704121 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:32.704127 | orchestrator |
2026-04-11 00:56:32.704134 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-11 00:56:32.704140 | orchestrator | Saturday 11 April 2026 00:50:38 +0000 (0:00:00.714) 0:05:01.446 ********
2026-04-11 00:56:32.704146 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.704153 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.704160 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.704169 | orchestrator |
2026-04-11 00:56:32.704176 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-11 00:56:32.704181 | orchestrator | Saturday 11 April 2026 00:50:39 +0000 (0:00:00.810) 0:05:02.257 ********
2026-04-11 00:56:32.704187 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704193 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704199 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.704204 | orchestrator |
2026-04-11 00:56:32.704209 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-11 00:56:32.704215 | orchestrator | Saturday 11 April 2026 00:50:40 +0000 (0:00:00.334) 0:05:02.591 ********
2026-04-11 00:56:32.704221 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704227 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704233 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.704239 | orchestrator |
2026-04-11 00:56:32.704246 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-11 00:56:32.704252 | orchestrator | Saturday 11 April 2026 00:50:40 +0000 (0:00:00.334) 0:05:02.925 ********
2026-04-11 00:56:32.704259 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704263 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704267 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.704271 | orchestrator |
2026-04-11 00:56:32.704275 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-11 00:56:32.704279 | orchestrator | Saturday 11 April 2026 00:50:40 +0000 (0:00:00.549) 0:05:03.475 ********
2026-04-11 00:56:32.704282 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.704286 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.704290 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.704294 | orchestrator |
2026-04-11 00:56:32.704297 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-11 00:56:32.704301 | orchestrator | Saturday 11 April 2026 00:50:41 +0000 (0:00:00.820) 0:05:04.295 ********
2026-04-11 00:56:32.704305 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704309 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704312 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.704316 | orchestrator |
2026-04-11 00:56:32.704320 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-11 00:56:32.704324 | orchestrator | Saturday 11 April 2026 00:50:42 +0000 (0:00:00.284) 0:05:04.579 ********
2026-04-11 00:56:32.704327 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704331 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704335 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.704338 | orchestrator |
2026-04-11 00:56:32.704342 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-11 00:56:32.704350 | orchestrator | Saturday 11 April 2026 00:50:42 +0000 (0:00:00.296) 0:05:04.875 ********
2026-04-11 00:56:32.704354 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.704358 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.704361 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.704365 | orchestrator |
2026-04-11 00:56:32.704369 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-11 00:56:32.704373 | orchestrator | Saturday 11 April 2026 00:50:43 +0000 (0:00:01.190) 0:05:06.066 ********
2026-04-11 00:56:32.704377 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.704380 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.704384 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.704388 | orchestrator |
2026-04-11 00:56:32.704392 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-11 00:56:32.704396 | orchestrator | Saturday 11 April 2026 00:50:44 +0000 (0:00:00.734) 0:05:06.800 ********
2026-04-11 00:56:32.704399 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704403 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704407 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.704411 | orchestrator |
2026-04-11 00:56:32.704414 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 00:56:32.704418 | orchestrator | Saturday 11 April 2026 00:50:44 +0000 (0:00:00.290) 0:05:07.091 ********
2026-04-11 00:56:32.704422 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.704426 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.704430 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.704433 | orchestrator |
2026-04-11 00:56:32.704437 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 00:56:32.704441 | orchestrator | Saturday 11 April 2026 00:50:44 +0000 (0:00:00.407) 0:05:07.499 ********
2026-04-11 00:56:32.704445 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704449 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704452 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.704456 | orchestrator |
2026-04-11 00:56:32.704460 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 00:56:32.704483 | orchestrator | Saturday 11 April 2026 00:50:45 +0000 (0:00:00.598) 0:05:08.097 ********
2026-04-11 00:56:32.704515 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704518 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704522 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.704526 | orchestrator |
2026-04-11 00:56:32.704530 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 00:56:32.704534 | orchestrator | Saturday 11 April 2026 00:50:45 +0000 (0:00:00.289) 0:05:08.386 ********
2026-04-11 00:56:32.704538 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704541 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704545 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.704549 | orchestrator |
2026-04-11 00:56:32.704555 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 00:56:32.704559 | orchestrator | Saturday 11 April 2026 00:50:46 +0000 (0:00:00.317) 0:05:08.704 ********
2026-04-11 00:56:32.704563 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704567 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704571 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.704574 | orchestrator |
2026-04-11 00:56:32.704578 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 00:56:32.704582 | orchestrator | Saturday 11 April 2026 00:50:46 +0000 (0:00:00.289) 0:05:08.993 ********
2026-04-11 00:56:32.704586 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704589 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704593 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.704597 | orchestrator |
2026-04-11 00:56:32.704601 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 00:56:32.704604 | orchestrator | Saturday 11 April 2026 00:50:46 +0000 (0:00:00.284) 0:05:09.277 ********
2026-04-11 00:56:32.704611 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.704615 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.704619 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.704623 | orchestrator |
2026-04-11 00:56:32.704627 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 00:56:32.704630 | orchestrator | Saturday 11 April 2026 00:50:47 +0000 (0:00:00.590) 0:05:09.868 ********
2026-04-11 00:56:32.704634 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.704638 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.704642 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.704645 | orchestrator |
2026-04-11 00:56:32.704649 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 00:56:32.704653 | orchestrator | Saturday 11 April 2026 00:50:47 +0000 (0:00:00.319) 0:05:10.188 ********
2026-04-11 00:56:32.704657 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.704660 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.704665 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.704671 | orchestrator |
2026-04-11 00:56:32.704678 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-11 00:56:32.704687 | orchestrator | Saturday 11 April 2026 00:50:48 +0000 (0:00:00.535) 0:05:10.723 ********
2026-04-11 00:56:32.704695 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 00:56:32.704701 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 00:56:32.704707 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 00:56:32.704714 | orchestrator |
2026-04-11 00:56:32.704720 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-11 00:56:32.704727 | orchestrator | Saturday 11 April 2026 00:50:49 +0000 (0:00:00.854) 0:05:11.578 ********
2026-04-11 00:56:32.704734 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:32.704742 | orchestrator |
2026-04-11 00:56:32.704748 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-11 00:56:32.704754 | orchestrator | Saturday 11 April 2026 00:50:49 +0000 (0:00:00.728) 0:05:12.306 ********
2026-04-11 00:56:32.704758 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.704761 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.704765 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.704769 | orchestrator |
2026-04-11 00:56:32.704773 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-11 00:56:32.704776 | orchestrator | Saturday 11 April 2026 00:50:50 +0000 (0:00:00.638) 0:05:12.945 ********
2026-04-11 00:56:32.704780 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704784 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704788 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.704791 | orchestrator |
2026-04-11 00:56:32.704795 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-11 00:56:32.704799 | orchestrator | Saturday 11 April 2026 00:50:50 +0000 (0:00:00.269) 0:05:13.214 ********
2026-04-11 00:56:32.704803 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-11 00:56:32.704806 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-11 00:56:32.704810 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-11 00:56:32.704814 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-04-11 00:56:32.704818 | orchestrator |
2026-04-11 00:56:32.704822 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-11 00:56:32.704825 | orchestrator | Saturday 11 April 2026 00:51:01 +0000 (0:00:10.663) 0:05:23.877 ********
2026-04-11 00:56:32.704829 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.704833 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.704836 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.704840 | orchestrator |
2026-04-11 00:56:32.704844 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-11 00:56:32.704851 | orchestrator | Saturday 11 April 2026 00:51:01 +0000 (0:00:00.299) 0:05:24.176 ********
2026-04-11 00:56:32.704855 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-11 00:56:32.704859 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-11 00:56:32.704863 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-11 00:56:32.704867 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 00:56:32.704871 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-11 00:56:32.704891 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 00:56:32.704895 | orchestrator |
2026-04-11 00:56:32.704899 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-11 00:56:32.704903 | orchestrator | Saturday 11 April 2026 00:51:03 +0000 (0:00:02.334) 0:05:26.511 ********
2026-04-11 00:56:32.704907 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-11 00:56:32.704911 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-11 00:56:32.704914 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-11 00:56:32.704922 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-11 00:56:32.704925 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-11 00:56:32.704929 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-11 00:56:32.704933 | orchestrator |
2026-04-11 00:56:32.704937 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-11 00:56:32.704940 | orchestrator | Saturday 11 April 2026 00:51:05 +0000 (0:00:01.349) 0:05:27.861 ********
2026-04-11 00:56:32.704944 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.704948 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.704952 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.704955 | orchestrator |
2026-04-11 00:56:32.704959 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-11 00:56:32.704963 | orchestrator | Saturday 11 April 2026 00:51:05 +0000 (0:00:00.626) 0:05:28.487 ********
2026-04-11 00:56:32.704967 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704970 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704974 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.704978 | orchestrator |
2026-04-11 00:56:32.704982 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-11 00:56:32.704985 | orchestrator | Saturday 11 April 2026 00:51:06 +0000 (0:00:00.438) 0:05:28.926 ********
2026-04-11 00:56:32.704989 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.704993 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.704997 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.705000 | orchestrator |
2026-04-11 00:56:32.705004 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-11 00:56:32.705008 | orchestrator | Saturday 11 April 2026 00:51:06 +0000 (0:00:00.270) 0:05:29.196 ********
2026-04-11 00:56:32.705012 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:32.705016 | orchestrator |
2026-04-11 00:56:32.705019 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-11 00:56:32.705023 | orchestrator | Saturday 11 April 2026 00:51:07 +0000 (0:00:00.454) 0:05:29.651 ********
2026-04-11 00:56:32.705027 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.705031 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.705034 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.705038 | orchestrator |
2026-04-11 00:56:32.705042 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-11 00:56:32.705046 | orchestrator | Saturday 11 April 2026 00:51:07 +0000 (0:00:00.282) 0:05:30.072 ********
2026-04-11 00:56:32.705049 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.705053 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.705057 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.705061 | orchestrator |
2026-04-11 00:56:32.705064 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-11 00:56:32.705071 | orchestrator | Saturday 11 April 2026 00:51:07 +0000 (0:00:00.450) 0:05:30.355 ********
2026-04-11 00:56:32.705075 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:32.705079 | orchestrator |
2026-04-11 00:56:32.705082 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-11 00:56:32.705086 | orchestrator | Saturday 11 April 2026 00:51:08 +0000 (0:00:00.450) 0:05:30.805 ********
2026-04-11 00:56:32.705090 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.705094 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.705097 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.705101 | orchestrator |
2026-04-11 00:56:32.705105 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-11 00:56:32.705109 | orchestrator | Saturday 11 April 2026 00:51:09 +0000 (0:00:01.300) 0:05:32.106 ********
2026-04-11 00:56:32.705112 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.705116 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.705120 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.705124 | orchestrator |
2026-04-11 00:56:32.705128 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-11 00:56:32.705131 | orchestrator | Saturday 11 April 2026 00:51:10 +0000 (0:00:01.124) 0:05:33.230 ********
2026-04-11 00:56:32.705135 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.705139 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.705142 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.705146 | orchestrator |
2026-04-11 00:56:32.705150 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-11 00:56:32.705154 | orchestrator | Saturday 11 April 2026 00:51:12 +0000 (0:00:01.785) 0:05:35.015 ********
2026-04-11 00:56:32.705158 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.705161 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.705165 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.705169 | orchestrator |
2026-04-11 00:56:32.705172 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-11 00:56:32.705176 | orchestrator | Saturday 11 April 2026 00:51:14 +0000 (0:00:01.798) 0:05:36.814 ********
2026-04-11 00:56:32.705180 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.705184 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.705187 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-11 00:56:32.705191 | orchestrator |
2026-04-11 00:56:32.705195 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-11 00:56:32.705199 | orchestrator | Saturday 11 April 2026 00:51:14 +0000 (0:00:00.484) 0:05:37.298 ********
2026-04-11 00:56:32.705214 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-11 00:56:32.705218 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-11 00:56:32.705222 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-04-11 00:56:32.705226 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-04-11 00:56:32.705232 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-04-11 00:56:32.705236 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2026-04-11 00:56:32.705239 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (24 retries left).
2026-04-11 00:56:32.705243 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-11 00:56:32.705247 | orchestrator |
2026-04-11 00:56:32.705251 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-11 00:56:32.705257 | orchestrator | Saturday 11 April 2026 00:51:56 +0000 (0:00:42.010) 0:06:19.309 ********
2026-04-11 00:56:32.705261 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-11 00:56:32.705265 | orchestrator |
2026-04-11 00:56:32.705269 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-11 00:56:32.705272 | orchestrator | Saturday 11 April 2026 00:51:58 +0000 (0:00:01.230) 0:06:20.540 ********
2026-04-11 00:56:32.705276 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.705280 | orchestrator |
2026-04-11 00:56:32.705284 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-11 00:56:32.705287 | orchestrator | Saturday 11 April 2026 00:51:58 +0000 (0:00:00.272) 0:06:20.813 ********
2026-04-11 00:56:32.705291 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.705295 | orchestrator |
2026-04-11 00:56:32.705299 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-11 00:56:32.705302 | orchestrator | Saturday 11 April 2026 00:51:58 +0000 (0:00:00.126) 0:06:20.940 ********
2026-04-11 00:56:32.705306 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-11 00:56:32.705310 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-11 00:56:32.705314 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-11 00:56:32.705318 | orchestrator |
2026-04-11 00:56:32.705321 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-11 00:56:32.705325 | orchestrator | Saturday 11 April 2026 00:52:04 +0000 (0:00:06.290) 0:06:27.230 ********
2026-04-11 00:56:32.705329 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-11 00:56:32.705333 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-11 00:56:32.705336 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-11 00:56:32.705340 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-11 00:56:32.705344 | orchestrator |
2026-04-11 00:56:32.705348 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-11 00:56:32.705351 | orchestrator | Saturday 11 April 2026 00:52:09 +0000 (0:00:04.963) 0:06:32.193 ********
2026-04-11 00:56:32.705355 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.705359 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.705363 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.705367 | orchestrator |
2026-04-11 00:56:32.705370 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-11 00:56:32.705374 | orchestrator | Saturday 11 April 2026 00:52:10 +0000 (0:00:00.717) 0:06:32.911 ********
2026-04-11 00:56:32.705378 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:32.705382 | orchestrator |
2026-04-11 00:56:32.705385 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-11 00:56:32.705389 | orchestrator | Saturday 11 April 2026 00:52:10 +0000 (0:00:00.515) 0:06:33.427 ********
2026-04-11 00:56:32.705393 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.705397 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.705401 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.705404 | orchestrator |
2026-04-11 00:56:32.705408 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-11 00:56:32.705412 | orchestrator | Saturday 11 April 2026 00:52:11 +0000 (0:00:00.319) 0:06:33.747 ********
2026-04-11 00:56:32.705415 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.705419 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.705423 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.705427 | orchestrator |
2026-04-11 00:56:32.705430 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-11 00:56:32.705434 | orchestrator | Saturday 11 April 2026 00:52:12 +0000 (0:00:01.596) 0:06:35.343 ********
2026-04-11 00:56:32.705438 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 00:56:32.705444 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 00:56:32.705448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 00:56:32.705452 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.705455 | orchestrator |
2026-04-11 00:56:32.705459 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-11 00:56:32.705463 | orchestrator | Saturday 11 April 2026 00:52:13 +0000 (0:00:00.580) 0:06:35.924 ********
2026-04-11 00:56:32.705467 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.705470 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.705474 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.705478 | orchestrator |
2026-04-11 00:56:32.705504 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-11 00:56:32.705509 | orchestrator |
2026-04-11 00:56:32.705513 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-11 00:56:32.705517 | orchestrator | Saturday 11 April 2026 00:52:13 +0000 (0:00:00.555) 0:06:36.479 ********
2026-04-11 00:56:32.705521 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:56:32.705525 | orchestrator |
2026-04-11 00:56:32.705529 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-11 00:56:32.705534 | orchestrator | Saturday 11 April 2026 00:52:14 +0000 (0:00:00.734) 0:06:37.214 ********
2026-04-11 00:56:32.705538 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:56:32.705542 | orchestrator |
2026-04-11 00:56:32.705546 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-11 00:56:32.705550 | orchestrator | Saturday 11 April 2026 00:52:15 +0000 (0:00:00.625) 0:06:37.840 ********
2026-04-11 00:56:32.705554 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.705557 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.705561 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.705565 | orchestrator |
2026-04-11 00:56:32.705569 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-11 00:56:32.705572 | orchestrator | Saturday 11 April 2026 00:52:15 +0000 (0:00:00.535) 0:06:38.375 ********
2026-04-11 00:56:32.705576 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.705580 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.705584 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.705587 | orchestrator |
2026-04-11 00:56:32.705591 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-11 00:56:32.705595 | orchestrator | Saturday 11 April 2026 00:52:16 +0000 (0:00:00.704) 0:06:39.080 ********
2026-04-11 00:56:32.705599 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.705602 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.705606 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.705610 | orchestrator |
2026-04-11 00:56:32.705614 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-11 00:56:32.705618 | orchestrator | Saturday 11 April 2026 00:52:17 +0000 (0:00:00.702) 0:06:39.783 ********
2026-04-11 00:56:32.705621 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.705625 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.705629 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.705632 | orchestrator |
2026-04-11 00:56:32.705636 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-11 00:56:32.705640 | orchestrator | Saturday 11 April 2026 00:52:18 +0000 (0:00:00.794) 0:06:40.578 ********
2026-04-11 00:56:32.705644 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.705647 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.705651 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.705655 | orchestrator |
2026-04-11 00:56:32.705659 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-11 00:56:32.705662 | orchestrator | Saturday 11 April 2026 00:52:18 +0000 (0:00:00.463) 0:06:41.042 ********
2026-04-11 00:56:32.705669 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.705673 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.705677 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.705681 | orchestrator |
2026-04-11 00:56:32.705684 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-11 00:56:32.705688 | orchestrator | Saturday 11 April 2026 00:52:18 +0000 (0:00:00.275) 0:06:41.317 ********
2026-04-11 00:56:32.705692 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.705696 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.705699 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.705703 | orchestrator |
2026-04-11 00:56:32.705707 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-11 00:56:32.705711 | orchestrator | Saturday 11 April 2026 00:52:19 +0000 (0:00:00.268) 0:06:41.586 ********
2026-04-11 00:56:32.705714 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.705718 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.705722 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.705726 | orchestrator |
2026-04-11 00:56:32.705729 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-11 00:56:32.705733 | orchestrator | Saturday 11 April 2026 00:52:19 +0000 (0:00:00.691) 0:06:42.277 ********
2026-04-11 00:56:32.705737 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.705741 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.705744 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.705748 | orchestrator |
2026-04-11 00:56:32.705752 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-11 00:56:32.705756 | orchestrator | Saturday 11 April 2026 00:52:20 +0000 (0:00:00.689) 0:06:42.967 ********
2026-04-11 00:56:32.705760 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.705763 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.705767 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.705771 | orchestrator |
2026-04-11 00:56:32.705775 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 00:56:32.705778 | orchestrator | Saturday 11 April 2026 00:52:20 +0000 (0:00:00.443) 0:06:43.411 ********
2026-04-11 00:56:32.705782 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.705786 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.705789 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.705793 | orchestrator |
2026-04-11 00:56:32.705797 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 00:56:32.705801 | orchestrator | Saturday 11 April 2026 00:52:21 +0000 (0:00:00.270) 0:06:43.681 ********
2026-04-11 00:56:32.705804 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.705808 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.705812 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.705816 | orchestrator |
2026-04-11 00:56:32.705819 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 00:56:32.705823 | orchestrator | Saturday 11 April 2026 00:52:21 +0000 (0:00:00.266) 0:06:43.948 ********
2026-04-11 00:56:32.705827 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.705831 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.705834 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.705838 | orchestrator |
2026-04-11 00:56:32.705844 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 00:56:32.705848 | orchestrator | Saturday 11 April 2026 00:52:21 +0000 (0:00:00.249) 0:06:44.198 ********
2026-04-11 00:56:32.705852 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.705855 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.705859 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.705863 | orchestrator |
2026-04-11 00:56:32.705867 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 00:56:32.705873 | orchestrator | Saturday 11 April 2026 00:52:22 +0000 (0:00:00.485) 0:06:44.683 ********
2026-04-11 00:56:32.705877 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.705885 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.705889 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.705893 | orchestrator |
2026-04-11 00:56:32.705897 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 00:56:32.705901 | orchestrator | Saturday 11 April 2026 00:52:22 +0000 (0:00:00.271) 0:06:44.954 ********
2026-04-11 00:56:32.705904 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.705908 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.705912 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.705916 | orchestrator |
2026-04-11 00:56:32.705922 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 00:56:32.705928 | orchestrator | Saturday 11 April 2026 00:52:22 +0000 (0:00:00.274) 0:06:45.228 ********
2026-04-11 00:56:32.705935 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.705944 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.705949 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.705955 | orchestrator |
2026-04-11 00:56:32.705962 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 00:56:32.705968 | orchestrator | Saturday 11 April 2026 00:52:22 +0000 (0:00:00.269) 0:06:45.498 ********
2026-04-11 00:56:32.705973 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.705979 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.705985 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.705992 | orchestrator |
2026-04-11 00:56:32.705998 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 00:56:32.706005 | orchestrator | Saturday 11 April 2026 00:52:23 +0000 (0:00:00.496) 0:06:45.994 ********
2026-04-11 00:56:32.706011 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.706038 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.706042 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.706046 | orchestrator |
2026-04-11 00:56:32.706050 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-11 00:56:32.706054 | orchestrator | Saturday 11 April 2026 00:52:23 +0000 (0:00:00.528) 0:06:46.523 ********
2026-04-11 00:56:32.706058 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.706061 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.706065 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.706069 | orchestrator |
2026-04-11 00:56:32.706073 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-11 00:56:32.706076 | orchestrator | Saturday 11 April 2026 00:52:24 +0000 (0:00:00.251) 0:06:46.774 ********
2026-04-11 00:56:32.706080 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 00:56:32.706084 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 00:56:32.706088 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 00:56:32.706091 | orchestrator |
2026-04-11 00:56:32.706095 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-11 00:56:32.706099 | orchestrator | Saturday 11 April 2026 00:52:25 +0000 (0:00:00.901) 0:06:47.675 ********
2026-04-11 00:56:32.706103 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-5, testbed-node-4
2026-04-11 00:56:32.706106 | orchestrator |
2026-04-11 00:56:32.706110 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-11 00:56:32.706114 | orchestrator | Saturday 11 April 2026 00:52:25 +0000
(0:00:00.500) 0:06:48.176 ******** 2026-04-11 00:56:32.706118 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.706121 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.706125 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.706129 | orchestrator | 2026-04-11 00:56:32.706133 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-11 00:56:32.706137 | orchestrator | Saturday 11 April 2026 00:52:25 +0000 (0:00:00.255) 0:06:48.431 ******** 2026-04-11 00:56:32.706140 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.706148 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.706152 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.706156 | orchestrator | 2026-04-11 00:56:32.706160 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-11 00:56:32.706164 | orchestrator | Saturday 11 April 2026 00:52:26 +0000 (0:00:00.402) 0:06:48.834 ******** 2026-04-11 00:56:32.706167 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.706171 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.706175 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.706179 | orchestrator | 2026-04-11 00:56:32.706182 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-11 00:56:32.706186 | orchestrator | Saturday 11 April 2026 00:52:26 +0000 (0:00:00.634) 0:06:49.469 ******** 2026-04-11 00:56:32.706190 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.706193 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.706197 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.706201 | orchestrator | 2026-04-11 00:56:32.706205 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-11 00:56:32.706208 | orchestrator | Saturday 11 April 2026 00:52:27 +0000 (0:00:00.296) 0:06:49.765 ******** 
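[editor's note] The "Apply operating system tuning" task that reports `changed` below applies kernel parameters via Ansible's sysctl handling. As a rough sketch only, the values shown in this log (1048576, 26234859, 0, 10, 67584) correspond to a sysctl.conf fragment like the following; the exact file path ceph-ansible writes to is not shown in this log and is an assumption:

```ini
# Sketch of the OS tuning applied by ceph-osd (values taken from the
# "changed" items in this log; target file path is assumed, not logged)
fs.aio-max-nr = 1048576
fs.file-max = 26234859
vm.zone_reclaim_mode = 0
vm.swappiness = 10
vm.min_free_kbytes = 67584
```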
2026-04-11 00:56:32.706212 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-11 00:56:32.706216 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-11 00:56:32.706224 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-11 00:56:32.706228 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-11 00:56:32.706232 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-11 00:56:32.706236 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-11 00:56:32.706239 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-11 00:56:32.706245 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-11 00:56:32.706249 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-11 00:56:32.706253 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-11 00:56:32.706257 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-11 00:56:32.706261 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-11 00:56:32.706264 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-11 00:56:32.706268 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-11 00:56:32.706272 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-11 00:56:32.706276 | orchestrator | 2026-04-11 00:56:32.706279 | orchestrator 
| TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-11 00:56:32.706283 | orchestrator | Saturday 11 April 2026 00:52:31 +0000 (0:00:04.168) 0:06:53.934 ******** 2026-04-11 00:56:32.706287 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.706291 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.706294 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.706298 | orchestrator | 2026-04-11 00:56:32.706302 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-11 00:56:32.706305 | orchestrator | Saturday 11 April 2026 00:52:31 +0000 (0:00:00.435) 0:06:54.370 ******** 2026-04-11 00:56:32.706309 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:56:32.706313 | orchestrator | 2026-04-11 00:56:32.706317 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-11 00:56:32.706323 | orchestrator | Saturday 11 April 2026 00:52:32 +0000 (0:00:00.436) 0:06:54.806 ******** 2026-04-11 00:56:32.706327 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-11 00:56:32.706331 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-11 00:56:32.706335 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-11 00:56:32.706338 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-11 00:56:32.706342 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-11 00:56:32.706346 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-11 00:56:32.706352 | orchestrator | 2026-04-11 00:56:32.706358 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-11 00:56:32.706367 | orchestrator | Saturday 11 April 2026 00:52:33 +0000 (0:00:00.991) 
0:06:55.798 ******** 2026-04-11 00:56:32.706374 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:56:32.706380 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-11 00:56:32.706386 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 00:56:32.706391 | orchestrator | 2026-04-11 00:56:32.706397 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-11 00:56:32.706403 | orchestrator | Saturday 11 April 2026 00:52:35 +0000 (0:00:02.278) 0:06:58.076 ******** 2026-04-11 00:56:32.706409 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-11 00:56:32.706415 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-11 00:56:32.706420 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-11 00:56:32.706427 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:56:32.706433 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-11 00:56:32.706439 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:56:32.706446 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-11 00:56:32.706453 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-11 00:56:32.706459 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:56:32.706465 | orchestrator | 2026-04-11 00:56:32.706472 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-11 00:56:32.706478 | orchestrator | Saturday 11 April 2026 00:52:36 +0000 (0:00:01.397) 0:06:59.474 ******** 2026-04-11 00:56:32.706493 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-11 00:56:32.706499 | orchestrator | 2026-04-11 00:56:32.706505 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-11 00:56:32.706511 | orchestrator | Saturday 11 April 2026 00:52:39 +0000 (0:00:02.125) 
0:07:01.599 ******** 2026-04-11 00:56:32.706518 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:56:32.706524 | orchestrator | 2026-04-11 00:56:32.706530 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-11 00:56:32.706536 | orchestrator | Saturday 11 April 2026 00:52:39 +0000 (0:00:00.446) 0:07:02.046 ******** 2026-04-11 00:56:32.706546 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e341cbc8-c5ec-51c3-80df-3957b9d9208d', 'data_vg': 'ceph-e341cbc8-c5ec-51c3-80df-3957b9d9208d'}) 2026-04-11 00:56:32.706557 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b3fdd62a-f314-5cc9-917d-dd1b13b97b05', 'data_vg': 'ceph-b3fdd62a-f314-5cc9-917d-dd1b13b97b05'}) 2026-04-11 00:56:32.706564 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-55dbb7e6-b9f3-5799-9073-e4231f2b06a1', 'data_vg': 'ceph-55dbb7e6-b9f3-5799-9073-e4231f2b06a1'}) 2026-04-11 00:56:32.706575 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-11d3af0d-f4a7-5343-8eab-aab153531328', 'data_vg': 'ceph-11d3af0d-f4a7-5343-8eab-aab153531328'}) 2026-04-11 00:56:32.706579 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1d143ac2-4627-58f1-a5c6-72fac7fe3e62', 'data_vg': 'ceph-1d143ac2-4627-58f1-a5c6-72fac7fe3e62'}) 2026-04-11 00:56:32.706589 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-296a8e73-f310-5078-85ef-ed795f213448', 'data_vg': 'ceph-296a8e73-f310-5078-85ef-ed795f213448'}) 2026-04-11 00:56:32.706593 | orchestrator | 2026-04-11 00:56:32.706596 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-11 00:56:32.706600 | orchestrator | Saturday 11 April 2026 00:53:23 +0000 (0:00:44.356) 0:07:46.402 ******** 2026-04-11 00:56:32.706604 | orchestrator | skipping: [testbed-node-3] 2026-04-11 
00:56:32.706608 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.706611 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.706615 | orchestrator | 2026-04-11 00:56:32.706619 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-11 00:56:32.706623 | orchestrator | Saturday 11 April 2026 00:53:24 +0000 (0:00:00.420) 0:07:46.823 ******** 2026-04-11 00:56:32.706626 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:56:32.706630 | orchestrator | 2026-04-11 00:56:32.706634 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-11 00:56:32.706638 | orchestrator | Saturday 11 April 2026 00:53:24 +0000 (0:00:00.451) 0:07:47.274 ******** 2026-04-11 00:56:32.706641 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.706645 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.706649 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.706653 | orchestrator | 2026-04-11 00:56:32.706656 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-11 00:56:32.706660 | orchestrator | Saturday 11 April 2026 00:53:25 +0000 (0:00:00.750) 0:07:48.025 ******** 2026-04-11 00:56:32.706664 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.706667 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.706671 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.706675 | orchestrator | 2026-04-11 00:56:32.706679 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-11 00:56:32.706682 | orchestrator | Saturday 11 April 2026 00:53:28 +0000 (0:00:02.990) 0:07:51.016 ******** 2026-04-11 00:56:32.706686 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:56:32.706690 | 
orchestrator | 2026-04-11 00:56:32.706694 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-11 00:56:32.706697 | orchestrator | Saturday 11 April 2026 00:53:28 +0000 (0:00:00.440) 0:07:51.456 ******** 2026-04-11 00:56:32.706701 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:56:32.706705 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:56:32.706709 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:56:32.706712 | orchestrator | 2026-04-11 00:56:32.706716 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-11 00:56:32.706720 | orchestrator | Saturday 11 April 2026 00:53:30 +0000 (0:00:01.153) 0:07:52.610 ******** 2026-04-11 00:56:32.706723 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:56:32.706727 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:56:32.706731 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:56:32.706734 | orchestrator | 2026-04-11 00:56:32.706738 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-11 00:56:32.706742 | orchestrator | Saturday 11 April 2026 00:53:31 +0000 (0:00:01.432) 0:07:54.042 ******** 2026-04-11 00:56:32.706746 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:56:32.706749 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:56:32.706753 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:56:32.706757 | orchestrator | 2026-04-11 00:56:32.706761 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-11 00:56:32.706764 | orchestrator | Saturday 11 April 2026 00:53:33 +0000 (0:00:01.833) 0:07:55.876 ******** 2026-04-11 00:56:32.706768 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.706772 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.706775 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.706781 | orchestrator 
| 2026-04-11 00:56:32.706785 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-11 00:56:32.706789 | orchestrator | Saturday 11 April 2026 00:53:33 +0000 (0:00:00.307) 0:07:56.184 ******** 2026-04-11 00:56:32.706793 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.706797 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.706800 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.706804 | orchestrator | 2026-04-11 00:56:32.706808 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-11 00:56:32.706812 | orchestrator | Saturday 11 April 2026 00:53:33 +0000 (0:00:00.265) 0:07:56.449 ******** 2026-04-11 00:56:32.706815 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-04-11 00:56:32.706819 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-11 00:56:32.706823 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-11 00:56:32.706827 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-04-11 00:56:32.706830 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-04-11 00:56:32.706834 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-04-11 00:56:32.706838 | orchestrator | 2026-04-11 00:56:32.706841 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-11 00:56:32.706845 | orchestrator | Saturday 11 April 2026 00:53:35 +0000 (0:00:01.246) 0:07:57.696 ******** 2026-04-11 00:56:32.706851 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-11 00:56:32.706855 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-04-11 00:56:32.706859 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-11 00:56:32.706863 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-04-11 00:56:32.706867 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-04-11 00:56:32.706870 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-04-11 00:56:32.706874 | 
orchestrator | 2026-04-11 00:56:32.706878 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-11 00:56:32.706884 | orchestrator | Saturday 11 April 2026 00:53:37 +0000 (0:00:02.216) 0:07:59.912 ******** 2026-04-11 00:56:32.706888 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-04-11 00:56:32.706891 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-11 00:56:32.706895 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-11 00:56:32.706899 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-04-11 00:56:32.706902 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-04-11 00:56:32.706906 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-04-11 00:56:32.706910 | orchestrator | 2026-04-11 00:56:32.706914 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-11 00:56:32.706918 | orchestrator | Saturday 11 April 2026 00:53:40 +0000 (0:00:03.538) 0:08:03.451 ******** 2026-04-11 00:56:32.706921 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.706926 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.706932 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-11 00:56:32.706938 | orchestrator | 2026-04-11 00:56:32.706944 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-11 00:56:32.706950 | orchestrator | Saturday 11 April 2026 00:53:43 +0000 (0:00:02.542) 0:08:05.993 ******** 2026-04-11 00:56:32.706957 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.706961 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.706965 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-04-11 00:56:32.706969 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-11 00:56:32.706973 | orchestrator | 2026-04-11 00:56:32.706977 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-11 00:56:32.706980 | orchestrator | Saturday 11 April 2026 00:53:56 +0000 (0:00:12.561) 0:08:18.555 ******** 2026-04-11 00:56:32.706984 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.706988 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.706994 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.706998 | orchestrator | 2026-04-11 00:56:32.707002 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-11 00:56:32.707006 | orchestrator | Saturday 11 April 2026 00:53:56 +0000 (0:00:00.736) 0:08:19.292 ******** 2026-04-11 00:56:32.707010 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707013 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.707017 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.707021 | orchestrator | 2026-04-11 00:56:32.707024 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-11 00:56:32.707028 | orchestrator | Saturday 11 April 2026 00:53:57 +0000 (0:00:00.439) 0:08:19.732 ******** 2026-04-11 00:56:32.707032 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:56:32.707036 | orchestrator | 2026-04-11 00:56:32.707039 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-11 00:56:32.707043 | orchestrator | Saturday 11 April 2026 00:53:57 +0000 (0:00:00.457) 0:08:20.189 ******** 2026-04-11 00:56:32.707047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 00:56:32.707051 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-04-11 00:56:32.707054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 00:56:32.707058 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707062 | orchestrator | 2026-04-11 00:56:32.707066 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-11 00:56:32.707069 | orchestrator | Saturday 11 April 2026 00:53:58 +0000 (0:00:00.361) 0:08:20.551 ******** 2026-04-11 00:56:32.707073 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707077 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.707081 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.707084 | orchestrator | 2026-04-11 00:56:32.707088 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-11 00:56:32.707092 | orchestrator | Saturday 11 April 2026 00:53:58 +0000 (0:00:00.403) 0:08:20.954 ******** 2026-04-11 00:56:32.707095 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707099 | orchestrator | 2026-04-11 00:56:32.707103 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-11 00:56:32.707107 | orchestrator | Saturday 11 April 2026 00:53:58 +0000 (0:00:00.188) 0:08:21.143 ******** 2026-04-11 00:56:32.707110 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707114 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.707118 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.707121 | orchestrator | 2026-04-11 00:56:32.707125 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-11 00:56:32.707130 | orchestrator | Saturday 11 April 2026 00:53:58 +0000 (0:00:00.285) 0:08:21.428 ******** 2026-04-11 00:56:32.707139 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707147 | orchestrator | 2026-04-11 00:56:32.707153 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-11 00:56:32.707159 | orchestrator | Saturday 11 April 2026 00:53:59 +0000 (0:00:00.198) 0:08:21.626 ******** 2026-04-11 00:56:32.707165 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707170 | orchestrator | 2026-04-11 00:56:32.707176 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-11 00:56:32.707182 | orchestrator | Saturday 11 April 2026 00:53:59 +0000 (0:00:00.172) 0:08:21.799 ******** 2026-04-11 00:56:32.707189 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707195 | orchestrator | 2026-04-11 00:56:32.707205 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-11 00:56:32.707210 | orchestrator | Saturday 11 April 2026 00:53:59 +0000 (0:00:00.096) 0:08:21.895 ******** 2026-04-11 00:56:32.707213 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707217 | orchestrator | 2026-04-11 00:56:32.707221 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-11 00:56:32.707228 | orchestrator | Saturday 11 April 2026 00:53:59 +0000 (0:00:00.184) 0:08:22.080 ******** 2026-04-11 00:56:32.707232 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707236 | orchestrator | 2026-04-11 00:56:32.707242 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-11 00:56:32.707246 | orchestrator | Saturday 11 April 2026 00:53:59 +0000 (0:00:00.190) 0:08:22.270 ******** 2026-04-11 00:56:32.707250 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 00:56:32.707253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 00:56:32.707257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 00:56:32.707261 | orchestrator | skipping: [testbed-node-3] 2026-04-11 
00:56:32.707265 | orchestrator | 2026-04-11 00:56:32.707268 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-11 00:56:32.707272 | orchestrator | Saturday 11 April 2026 00:54:00 +0000 (0:00:00.347) 0:08:22.618 ******** 2026-04-11 00:56:32.707276 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707280 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.707283 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.707287 | orchestrator | 2026-04-11 00:56:32.707291 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-11 00:56:32.707295 | orchestrator | Saturday 11 April 2026 00:54:00 +0000 (0:00:00.481) 0:08:23.100 ******** 2026-04-11 00:56:32.707298 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707302 | orchestrator | 2026-04-11 00:56:32.707306 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-11 00:56:32.707309 | orchestrator | Saturday 11 April 2026 00:54:00 +0000 (0:00:00.200) 0:08:23.301 ******** 2026-04-11 00:56:32.707313 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707317 | orchestrator | 2026-04-11 00:56:32.707321 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-04-11 00:56:32.707324 | orchestrator | 2026-04-11 00:56:32.707328 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 00:56:32.707332 | orchestrator | Saturday 11 April 2026 00:54:01 +0000 (0:00:00.606) 0:08:23.908 ******** 2026-04-11 00:56:32.707336 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.707340 | orchestrator | 2026-04-11 00:56:32.707344 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-04-11 00:56:32.707347 | orchestrator | Saturday 11 April 2026 00:54:02 +0000 (0:00:01.064) 0:08:24.972 ******** 2026-04-11 00:56:32.707351 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 00:56:32.707355 | orchestrator | 2026-04-11 00:56:32.707359 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 00:56:32.707362 | orchestrator | Saturday 11 April 2026 00:54:03 +0000 (0:00:01.082) 0:08:26.054 ******** 2026-04-11 00:56:32.707366 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.707370 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.707374 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.707377 | orchestrator | ok: [testbed-node-0] 2026-04-11 00:56:32.707381 | orchestrator | ok: [testbed-node-1] 2026-04-11 00:56:32.707385 | orchestrator | ok: [testbed-node-2] 2026-04-11 00:56:32.707389 | orchestrator | 2026-04-11 00:56:32.707392 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 00:56:32.707396 | orchestrator | Saturday 11 April 2026 00:54:04 +0000 (0:00:01.142) 0:08:27.197 ******** 2026-04-11 00:56:32.707400 | orchestrator | skipping: [testbed-node-0] 2026-04-11 00:56:32.707404 | orchestrator | skipping: [testbed-node-1] 2026-04-11 00:56:32.707407 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.707411 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.707418 | orchestrator | skipping: [testbed-node-2] 2026-04-11 00:56:32.707421 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.707425 | orchestrator | 2026-04-11 00:56:32.707429 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-11 00:56:32.707433 | orchestrator | Saturday 11 
April 2026 00:54:05 +0000 (0:00:00.741) 0:08:27.939 ********
2026-04-11 00:56:32.707436 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.707440 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.707444 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.707448 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.707451 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.707455 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.707459 | orchestrator |
2026-04-11 00:56:32.707462 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-11 00:56:32.707466 | orchestrator | Saturday 11 April 2026 00:54:06 +0000 (0:00:00.701) 0:08:28.640 ********
2026-04-11 00:56:32.707470 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.707474 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.707477 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.707481 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.707575 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.707590 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.707595 | orchestrator |
2026-04-11 00:56:32.707599 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-11 00:56:32.707603 | orchestrator | Saturday 11 April 2026 00:54:06 +0000 (0:00:00.767) 0:08:29.407 ********
2026-04-11 00:56:32.707606 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.707610 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.707614 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.707618 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.707622 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.707625 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.707629 | orchestrator |
2026-04-11 00:56:32.707638 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-11 00:56:32.707642 | orchestrator | Saturday 11 April 2026 00:54:07 +0000 (0:00:01.045) 0:08:30.453 ********
2026-04-11 00:56:32.707646 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.707649 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.707653 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.707657 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.707661 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.707665 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.707668 | orchestrator |
2026-04-11 00:56:32.707675 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-11 00:56:32.707679 | orchestrator | Saturday 11 April 2026 00:54:08 +0000 (0:00:00.483) 0:08:30.936 ********
2026-04-11 00:56:32.707683 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.707687 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.707690 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.707694 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.707698 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.707702 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.707705 | orchestrator |
2026-04-11 00:56:32.707709 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-11 00:56:32.707713 | orchestrator | Saturday 11 April 2026 00:54:09 +0000 (0:00:00.622) 0:08:31.559 ********
2026-04-11 00:56:32.707717 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.707720 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.707724 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.707728 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.707732 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.707735 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.707739 | orchestrator |
2026-04-11 00:56:32.707743 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-11 00:56:32.707751 | orchestrator | Saturday 11 April 2026 00:54:09 +0000 (0:00:00.964) 0:08:32.523 ********
2026-04-11 00:56:32.707754 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.707758 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.707762 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.707766 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.707769 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.707773 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.707777 | orchestrator |
2026-04-11 00:56:32.707780 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-11 00:56:32.707784 | orchestrator | Saturday 11 April 2026 00:54:10 +0000 (0:00:00.924) 0:08:33.448 ********
2026-04-11 00:56:32.707788 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.707792 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.707795 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.707799 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.707803 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.707807 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.707810 | orchestrator |
2026-04-11 00:56:32.707814 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 00:56:32.707818 | orchestrator | Saturday 11 April 2026 00:54:11 +0000 (0:00:00.638) 0:08:34.086 ********
2026-04-11 00:56:32.707821 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.707825 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.707829 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.707833 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.707836 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.707840 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.707844 | orchestrator |
2026-04-11 00:56:32.707848 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 00:56:32.707851 | orchestrator | Saturday 11 April 2026 00:54:11 +0000 (0:00:00.411) 0:08:34.497 ********
2026-04-11 00:56:32.707855 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.707859 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.707863 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.707870 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.707876 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.707881 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.707887 | orchestrator |
2026-04-11 00:56:32.707893 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 00:56:32.707899 | orchestrator | Saturday 11 April 2026 00:54:12 +0000 (0:00:00.558) 0:08:35.056 ********
2026-04-11 00:56:32.707905 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.707911 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.707917 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.707923 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.707929 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.707935 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.707941 | orchestrator |
2026-04-11 00:56:32.707947 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 00:56:32.707953 | orchestrator | Saturday 11 April 2026 00:54:13 +0000 (0:00:00.495) 0:08:35.552 ********
2026-04-11 00:56:32.707959 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.707966 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.707972 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.707977 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.707981 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.707985 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.707989 | orchestrator |
2026-04-11 00:56:32.707992 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 00:56:32.707996 | orchestrator | Saturday 11 April 2026 00:54:13 +0000 (0:00:00.643) 0:08:36.195 ********
2026-04-11 00:56:32.708000 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.708004 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.708007 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.708015 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.708018 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.708022 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.708026 | orchestrator |
2026-04-11 00:56:32.708030 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 00:56:32.708033 | orchestrator | Saturday 11 April 2026 00:54:14 +0000 (0:00:00.486) 0:08:36.682 ********
2026-04-11 00:56:32.708037 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.708041 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.708045 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.708048 | orchestrator | skipping: [testbed-node-0]
2026-04-11 00:56:32.708052 | orchestrator | skipping: [testbed-node-1]
2026-04-11 00:56:32.708059 | orchestrator | skipping: [testbed-node-2]
2026-04-11 00:56:32.708063 | orchestrator |
2026-04-11 00:56:32.708072 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 00:56:32.708076 | orchestrator | Saturday 11 April 2026 00:54:14 +0000 (0:00:00.611) 0:08:37.293 ********
2026-04-11 00:56:32.708080 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.708084 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.708087 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.708091 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.708095 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.708099 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.708102 | orchestrator |
2026-04-11 00:56:32.708111 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 00:56:32.708115 | orchestrator | Saturday 11 April 2026 00:54:15 +0000 (0:00:00.488) 0:08:37.781 ********
2026-04-11 00:56:32.708118 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.708122 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.708126 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.708130 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.708133 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.708137 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.708141 | orchestrator |
2026-04-11 00:56:32.708145 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 00:56:32.708148 | orchestrator | Saturday 11 April 2026 00:54:15 +0000 (0:00:00.725) 0:08:38.507 ********
2026-04-11 00:56:32.708152 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.708156 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.708159 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.708163 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.708167 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.708171 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.708174 | orchestrator |
2026-04-11 00:56:32.708178 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-11 00:56:32.708182 | orchestrator | Saturday 11 April 2026 00:54:16 +0000 (0:00:00.913) 0:08:39.420 ********
2026-04-11 00:56:32.708186 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-11 00:56:32.708189 | orchestrator |
2026-04-11 00:56:32.708193 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-11 00:56:32.708197 | orchestrator | Saturday 11 April 2026 00:54:21 +0000 (0:00:04.379) 0:08:43.800 ********
2026-04-11 00:56:32.708201 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-11 00:56:32.708204 | orchestrator |
2026-04-11 00:56:32.708208 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-11 00:56:32.708212 | orchestrator | Saturday 11 April 2026 00:54:23 +0000 (0:00:02.295) 0:08:46.096 ********
2026-04-11 00:56:32.708216 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.708219 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.708223 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.708227 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.708231 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.708234 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.708238 | orchestrator |
2026-04-11 00:56:32.708245 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-11 00:56:32.708249 | orchestrator | Saturday 11 April 2026 00:54:25 +0000 (0:00:01.472) 0:08:47.568 ********
2026-04-11 00:56:32.708253 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.708257 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.708260 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.708264 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.708268 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.708271 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.708275 | orchestrator |
2026-04-11 00:56:32.708279 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-11 00:56:32.708283 | orchestrator | Saturday 11 April 2026 00:54:26 +0000 (0:00:01.118) 0:08:48.686 ********
2026-04-11 00:56:32.708287 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:32.708293 | orchestrator |
2026-04-11 00:56:32.708299 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-11 00:56:32.708306 | orchestrator | Saturday 11 April 2026 00:54:27 +0000 (0:00:00.991) 0:08:49.678 ********
2026-04-11 00:56:32.708312 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.708343 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.708351 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.708357 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.708376 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.708381 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.708385 | orchestrator |
2026-04-11 00:56:32.708389 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-11 00:56:32.708393 | orchestrator | Saturday 11 April 2026 00:54:28 +0000 (0:00:01.496) 0:08:51.174 ********
2026-04-11 00:56:32.708403 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.708408 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.708411 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.708415 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.708419 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.708422 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.708426 | orchestrator |
2026-04-11 00:56:32.708430 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-11 00:56:32.708434 | orchestrator | Saturday 11 April 2026 00:54:31 +0000 (0:00:03.315) 0:08:54.490 ********
2026-04-11 00:56:32.708438 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 00:56:32.708442 | orchestrator |
2026-04-11 00:56:32.708446 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-11 00:56:32.708449 | orchestrator | Saturday 11 April 2026 00:54:32 +0000 (0:00:01.001) 0:08:55.491 ********
2026-04-11 00:56:32.708453 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.708457 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.708461 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.708465 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.708468 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.708476 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.708480 | orchestrator |
2026-04-11 00:56:32.708507 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-11 00:56:32.708512 | orchestrator | Saturday 11 April 2026 00:54:33 +0000 (0:00:00.536) 0:08:56.028 ********
2026-04-11 00:56:32.708516 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.708520 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.708524 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.708528 | orchestrator | changed: [testbed-node-0]
2026-04-11 00:56:32.708531 | orchestrator | changed: [testbed-node-2]
2026-04-11 00:56:32.708538 | orchestrator | changed: [testbed-node-1]
2026-04-11 00:56:32.708542 | orchestrator |
2026-04-11 00:56:32.708552 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-11 00:56:32.708558 | orchestrator | Saturday 11 April 2026 00:54:35 +0000 (0:00:01.968) 0:08:57.997 ********
2026-04-11 00:56:32.708565 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.708571 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.708577 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.708583 | orchestrator | ok: [testbed-node-0]
2026-04-11 00:56:32.708589 | orchestrator | ok: [testbed-node-1]
2026-04-11 00:56:32.708596 | orchestrator | ok: [testbed-node-2]
2026-04-11 00:56:32.708602 | orchestrator |
2026-04-11 00:56:32.708609 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-11 00:56:32.708613 | orchestrator |
2026-04-11 00:56:32.708617 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-11 00:56:32.708620 | orchestrator | Saturday 11 April 2026 00:54:36 +0000 (0:00:00.719) 0:08:58.716 ********
2026-04-11 00:56:32.708624 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:56:32.708628 | orchestrator |
2026-04-11 00:56:32.708632 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-11 00:56:32.708636 | orchestrator | Saturday 11 April 2026 00:54:36 +0000 (0:00:00.629) 0:08:59.346 ********
2026-04-11 00:56:32.708640 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:56:32.708643 | orchestrator |
2026-04-11 00:56:32.708647 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-11 00:56:32.708651 | orchestrator | Saturday 11 April 2026 00:54:37 +0000 (0:00:00.477) 0:08:59.823 ********
2026-04-11 00:56:32.708654 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.708658 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.708662 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.708666 | orchestrator |
2026-04-11 00:56:32.708669 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-11 00:56:32.708673 | orchestrator | Saturday 11 April 2026 00:54:37 +0000 (0:00:00.518) 0:09:00.342 ********
2026-04-11 00:56:32.708677 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.708680 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.708684 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.708688 | orchestrator |
2026-04-11 00:56:32.708692 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-11 00:56:32.708695 | orchestrator | Saturday 11 April 2026 00:54:38 +0000 (0:00:00.698) 0:09:01.040 ********
2026-04-11 00:56:32.708699 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.708703 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.708707 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.708710 | orchestrator |
2026-04-11 00:56:32.708714 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-11 00:56:32.708718 | orchestrator | Saturday 11 April 2026 00:54:39 +0000 (0:00:00.715) 0:09:01.755 ********
2026-04-11 00:56:32.708722 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.708725 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.708729 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.708733 | orchestrator |
2026-04-11 00:56:32.708737 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-11 00:56:32.708740 | orchestrator | Saturday 11 April 2026 00:54:39 +0000 (0:00:00.718) 0:09:02.474 ********
2026-04-11 00:56:32.708744 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.708748 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.708752 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.708755 | orchestrator |
2026-04-11 00:56:32.708759 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-11 00:56:32.708763 | orchestrator | Saturday 11 April 2026 00:54:40 +0000 (0:00:00.569) 0:09:03.043 ********
2026-04-11 00:56:32.708767 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.708770 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.708777 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.708781 | orchestrator |
2026-04-11 00:56:32.708785 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-11 00:56:32.708789 | orchestrator | Saturday 11 April 2026 00:54:40 +0000 (0:00:00.298) 0:09:03.342 ********
2026-04-11 00:56:32.708792 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.708796 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.708800 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.708803 | orchestrator |
2026-04-11 00:56:32.708807 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-11 00:56:32.708811 | orchestrator | Saturday 11 April 2026 00:54:41 +0000 (0:00:00.290) 0:09:03.632 ********
2026-04-11 00:56:32.708815 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.708818 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.708822 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.708826 | orchestrator |
2026-04-11 00:56:32.708830 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-11 00:56:32.708833 | orchestrator | Saturday 11 April 2026 00:54:41 +0000 (0:00:00.743) 0:09:04.376 ********
2026-04-11 00:56:32.708837 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.708841 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.708844 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.708848 | orchestrator |
2026-04-11 00:56:32.708852 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-11 00:56:32.708856 | orchestrator | Saturday 11 April 2026 00:54:42 +0000 (0:00:01.019) 0:09:05.396 ********
2026-04-11 00:56:32.708860 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.708863 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.708870 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.708874 | orchestrator |
2026-04-11 00:56:32.708878 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 00:56:32.708882 | orchestrator | Saturday 11 April 2026 00:54:43 +0000 (0:00:00.311) 0:09:05.707 ********
2026-04-11 00:56:32.708885 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.708889 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.708893 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.708896 | orchestrator |
2026-04-11 00:56:32.708900 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 00:56:32.708906 | orchestrator | Saturday 11 April 2026 00:54:43 +0000 (0:00:00.301) 0:09:06.009 ********
2026-04-11 00:56:32.708910 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.708914 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.708917 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.708922 | orchestrator |
2026-04-11 00:56:32.708928 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 00:56:32.708935 | orchestrator | Saturday 11 April 2026 00:54:43 +0000 (0:00:00.335) 0:09:06.344 ********
2026-04-11 00:56:32.708941 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.708947 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.708954 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.708960 | orchestrator |
2026-04-11 00:56:32.708966 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 00:56:32.708973 | orchestrator | Saturday 11 April 2026 00:54:44 +0000 (0:00:00.631) 0:09:06.976 ********
2026-04-11 00:56:32.708977 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.708981 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.708984 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.708988 | orchestrator |
2026-04-11 00:56:32.708992 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 00:56:32.708996 | orchestrator | Saturday 11 April 2026 00:54:44 +0000 (0:00:00.337) 0:09:07.314 ********
2026-04-11 00:56:32.709000 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.709003 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.709007 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.709016 | orchestrator |
2026-04-11 00:56:32.709022 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 00:56:32.709029 | orchestrator | Saturday 11 April 2026 00:54:45 +0000 (0:00:00.281) 0:09:07.595 ********
2026-04-11 00:56:32.709035 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.709043 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.709049 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.709063 | orchestrator |
2026-04-11 00:56:32.709073 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 00:56:32.709077 | orchestrator | Saturday 11 April 2026 00:54:45 +0000 (0:00:00.313) 0:09:07.909 ********
2026-04-11 00:56:32.709081 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.709085 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.709089 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.709092 | orchestrator |
2026-04-11 00:56:32.709098 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 00:56:32.709104 | orchestrator | Saturday 11 April 2026 00:54:45 +0000 (0:00:00.532) 0:09:08.441 ********
2026-04-11 00:56:32.709110 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.709116 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.709123 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.709129 | orchestrator |
2026-04-11 00:56:32.709137 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 00:56:32.709144 | orchestrator | Saturday 11 April 2026 00:54:46 +0000 (0:00:00.318) 0:09:08.760 ********
2026-04-11 00:56:32.709151 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.709158 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.709162 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.709165 | orchestrator |
2026-04-11 00:56:32.709169 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-11 00:56:32.709173 | orchestrator | Saturday 11 April 2026 00:54:46 +0000 (0:00:00.529) 0:09:09.290 ********
2026-04-11 00:56:32.709177 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.709180 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.709184 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-11 00:56:32.709188 | orchestrator |
2026-04-11 00:56:32.709192 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-11 00:56:32.709196 | orchestrator | Saturday 11 April 2026 00:54:47 +0000 (0:00:00.625) 0:09:09.915 ********
2026-04-11 00:56:32.709200 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-11 00:56:32.709203 | orchestrator |
2026-04-11 00:56:32.709207 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-11 00:56:32.709211 | orchestrator | Saturday 11 April 2026 00:54:49 +0000 (0:00:01.910) 0:09:11.826 ********
2026-04-11 00:56:32.709216 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-11 00:56:32.709221 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.709225 | orchestrator |
2026-04-11 00:56:32.709228 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-11 00:56:32.709232 | orchestrator | Saturday 11 April 2026 00:54:49 +0000 (0:00:00.208) 0:09:12.034 ********
2026-04-11 00:56:32.709237 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-11 00:56:32.709248 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-11 00:56:32.709252 | orchestrator |
2026-04-11 00:56:32.709259 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-11 00:56:32.709263 | orchestrator | Saturday 11 April 2026 00:54:57 +0000 (0:00:08.057) 0:09:20.091 ********
2026-04-11 00:56:32.709266 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-11 00:56:32.709270 | orchestrator |
2026-04-11 00:56:32.709275 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-11 00:56:32.709284 | orchestrator | Saturday 11 April 2026 00:55:01 +0000 (0:00:03.687) 0:09:23.779 ********
2026-04-11 00:56:32.709291 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:56:32.709297 | orchestrator |
2026-04-11 00:56:32.709304 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-11 00:56:32.709309 | orchestrator | Saturday 11 April 2026 00:55:01 +0000 (0:00:00.744) 0:09:24.524 ********
2026-04-11 00:56:32.709313 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-11 00:56:32.709317 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-11 00:56:32.709321 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-11 00:56:32.709324 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-11 00:56:32.709328 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-11 00:56:32.709332 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-11 00:56:32.709335 | orchestrator |
2026-04-11 00:56:32.709339 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-11 00:56:32.709343 | orchestrator | Saturday 11 April 2026 00:55:03 +0000 (0:00:01.151) 0:09:25.675 ********
2026-04-11 00:56:32.709347 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 00:56:32.709350 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-11 00:56:32.709354 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-11 00:56:32.709358 | orchestrator |
2026-04-11 00:56:32.709362 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-11 00:56:32.709365 | orchestrator | Saturday 11 April 2026 00:55:05 +0000 (0:00:02.283) 0:09:27.958 ********
2026-04-11 00:56:32.709369 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-11 00:56:32.709373 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-11 00:56:32.709377 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.709380 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-11 00:56:32.709384 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-11 00:56:32.709389 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.709396 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-11 00:56:32.709402 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-11 00:56:32.709408 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.709415 | orchestrator |
2026-04-11 00:56:32.709421 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-11 00:56:32.709427 | orchestrator | Saturday 11 April 2026 00:55:06 +0000 (0:00:01.344) 0:09:29.303 ********
2026-04-11 00:56:32.709434 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.709440 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.709447 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.709469 | orchestrator |
2026-04-11 00:56:32.709477 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-11 00:56:32.709483 | orchestrator | Saturday 11 April 2026 00:55:09 +0000 (0:00:02.647) 0:09:31.950 ********
2026-04-11 00:56:32.709497 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.709501 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.709505 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.709509 | orchestrator |
2026-04-11 00:56:32.709513 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-11 00:56:32.709521 | orchestrator | Saturday 11 April 2026 00:55:10 +0000 (0:00:00.656) 0:09:32.607 ********
2026-04-11 00:56:32.709524 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:56:32.709528 | orchestrator |
2026-04-11 00:56:32.709532 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-11 00:56:32.709536 | orchestrator | Saturday 11 April 2026 00:55:10 +0000 (0:00:00.571) 0:09:33.178 ********
2026-04-11 00:56:32.709539 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:56:32.709543 | orchestrator |
2026-04-11 00:56:32.709547 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-11 00:56:32.709551 | orchestrator | Saturday 11 April 2026 00:55:11 +0000 (0:00:00.651) 0:09:33.830 ********
2026-04-11 00:56:32.709554 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.709558 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.709562 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.709566 | orchestrator |
2026-04-11 00:56:32.709569 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-11 00:56:32.709573 | orchestrator | Saturday 11 April 2026 00:55:12 +0000 (0:00:01.264) 0:09:35.095 ********
2026-04-11 00:56:32.709577 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.709585 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.709589 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.709593 | orchestrator |
2026-04-11 00:56:32.709597 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-11 00:56:32.709600 | orchestrator | Saturday 11 April 2026 00:55:13 +0000 (0:00:01.134) 0:09:36.230 ********
2026-04-11 00:56:32.709604 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.709608 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.709612 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.709615 | orchestrator |
2026-04-11 00:56:32.709623 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-11 00:56:32.709627 | orchestrator | Saturday 11 April 2026 00:55:15 +0000 (0:00:01.849) 0:09:38.079 ********
2026-04-11 00:56:32.709631 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.709635 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.709638 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.709642 | orchestrator |
2026-04-11 00:56:32.709646 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-11 00:56:32.709653 | orchestrator | Saturday 11 April 2026 00:55:17 +0000 (0:00:02.322) 0:09:40.401 ********
2026-04-11 00:56:32.709657 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.709661 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.709664 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.709668 | orchestrator |
2026-04-11 00:56:32.709672 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-11 00:56:32.709676 | orchestrator | Saturday 11 April 2026 00:55:19 +0000 (0:00:01.188) 0:09:41.590 ********
2026-04-11 00:56:32.709679 | orchestrator | changed: [testbed-node-3]
2026-04-11 00:56:32.709683 | orchestrator | changed: [testbed-node-4]
2026-04-11 00:56:32.709687 | orchestrator | changed: [testbed-node-5]
2026-04-11 00:56:32.709691 | orchestrator |
2026-04-11 00:56:32.709694 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-11 00:56:32.709698 | orchestrator | Saturday 11 April 2026 00:55:20 +0000 (0:00:01.003) 0:09:42.593 ********
2026-04-11 00:56:32.709702 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:56:32.709706 | orchestrator |
2026-04-11 00:56:32.709709 | orchestrator | RUNNING HANDLER [ceph-handler : Set
_mds_handler_called before restart] ******** 2026-04-11 00:56:32.709713 | orchestrator | Saturday 11 April 2026 00:55:20 +0000 (0:00:00.541) 0:09:43.135 ******** 2026-04-11 00:56:32.709717 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.709721 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.709727 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.709731 | orchestrator | 2026-04-11 00:56:32.709735 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-11 00:56:32.709741 | orchestrator | Saturday 11 April 2026 00:55:20 +0000 (0:00:00.327) 0:09:43.462 ******** 2026-04-11 00:56:32.709747 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:56:32.709752 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:56:32.709762 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:56:32.709768 | orchestrator | 2026-04-11 00:56:32.709775 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-11 00:56:32.709780 | orchestrator | Saturday 11 April 2026 00:55:22 +0000 (0:00:01.496) 0:09:44.959 ******** 2026-04-11 00:56:32.709786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 00:56:32.709792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 00:56:32.709798 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 00:56:32.709804 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.709810 | orchestrator | 2026-04-11 00:56:32.709816 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-11 00:56:32.709822 | orchestrator | Saturday 11 April 2026 00:55:23 +0000 (0:00:00.701) 0:09:45.660 ******** 2026-04-11 00:56:32.709828 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.709834 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.709841 | orchestrator | ok: [testbed-node-5] 2026-04-11 
00:56:32.709847 | orchestrator | 2026-04-11 00:56:32.709854 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-11 00:56:32.709860 | orchestrator | 2026-04-11 00:56:32.709866 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 00:56:32.709872 | orchestrator | Saturday 11 April 2026 00:55:23 +0000 (0:00:00.537) 0:09:46.198 ******** 2026-04-11 00:56:32.709879 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:56:32.709885 | orchestrator | 2026-04-11 00:56:32.709891 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 00:56:32.709898 | orchestrator | Saturday 11 April 2026 00:55:24 +0000 (0:00:00.663) 0:09:46.861 ******** 2026-04-11 00:56:32.709904 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:56:32.709910 | orchestrator | 2026-04-11 00:56:32.709917 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 00:56:32.709923 | orchestrator | Saturday 11 April 2026 00:55:24 +0000 (0:00:00.499) 0:09:47.360 ******** 2026-04-11 00:56:32.709929 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.709935 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.709942 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.709948 | orchestrator | 2026-04-11 00:56:32.709954 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 00:56:32.709960 | orchestrator | Saturday 11 April 2026 00:55:25 +0000 (0:00:00.286) 0:09:47.647 ******** 2026-04-11 00:56:32.709966 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.709972 | orchestrator | ok: [testbed-node-4] 2026-04-11 
00:56:32.709977 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.709983 | orchestrator | 2026-04-11 00:56:32.709990 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-11 00:56:32.709996 | orchestrator | Saturday 11 April 2026 00:55:26 +0000 (0:00:00.963) 0:09:48.610 ******** 2026-04-11 00:56:32.710002 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.710008 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.710045 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.710051 | orchestrator | 2026-04-11 00:56:32.710058 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 00:56:32.710065 | orchestrator | Saturday 11 April 2026 00:55:26 +0000 (0:00:00.724) 0:09:49.335 ******** 2026-04-11 00:56:32.710078 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.710084 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.710091 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.710098 | orchestrator | 2026-04-11 00:56:32.710104 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 00:56:32.710115 | orchestrator | Saturday 11 April 2026 00:55:27 +0000 (0:00:00.674) 0:09:50.009 ******** 2026-04-11 00:56:32.710122 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.710129 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.710135 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.710142 | orchestrator | 2026-04-11 00:56:32.710149 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 00:56:32.710156 | orchestrator | Saturday 11 April 2026 00:55:27 +0000 (0:00:00.299) 0:09:50.309 ******** 2026-04-11 00:56:32.710162 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.710172 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.710179 | orchestrator | skipping: 
[testbed-node-5] 2026-04-11 00:56:32.710186 | orchestrator | 2026-04-11 00:56:32.710192 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 00:56:32.710199 | orchestrator | Saturday 11 April 2026 00:55:28 +0000 (0:00:00.533) 0:09:50.842 ******** 2026-04-11 00:56:32.710206 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.710212 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.710218 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.710225 | orchestrator | 2026-04-11 00:56:32.710232 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 00:56:32.710238 | orchestrator | Saturday 11 April 2026 00:55:28 +0000 (0:00:00.280) 0:09:51.123 ******** 2026-04-11 00:56:32.710245 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.710252 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.710258 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.710265 | orchestrator | 2026-04-11 00:56:32.710271 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 00:56:32.710278 | orchestrator | Saturday 11 April 2026 00:55:29 +0000 (0:00:00.728) 0:09:51.851 ******** 2026-04-11 00:56:32.710284 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.710291 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.710298 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.710304 | orchestrator | 2026-04-11 00:56:32.710311 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 00:56:32.710318 | orchestrator | Saturday 11 April 2026 00:55:30 +0000 (0:00:00.682) 0:09:52.534 ******** 2026-04-11 00:56:32.710325 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.710331 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.710338 | orchestrator | skipping: [testbed-node-5] 2026-04-11 
00:56:32.710344 | orchestrator | 2026-04-11 00:56:32.710351 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 00:56:32.710358 | orchestrator | Saturday 11 April 2026 00:55:30 +0000 (0:00:00.426) 0:09:52.961 ******** 2026-04-11 00:56:32.710364 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.710371 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.710377 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.710384 | orchestrator | 2026-04-11 00:56:32.710391 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 00:56:32.710397 | orchestrator | Saturday 11 April 2026 00:55:30 +0000 (0:00:00.281) 0:09:53.242 ******** 2026-04-11 00:56:32.710404 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.710411 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.710417 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.710424 | orchestrator | 2026-04-11 00:56:32.710430 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 00:56:32.710437 | orchestrator | Saturday 11 April 2026 00:55:30 +0000 (0:00:00.277) 0:09:53.519 ******** 2026-04-11 00:56:32.710444 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.710450 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.710460 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.710467 | orchestrator | 2026-04-11 00:56:32.710474 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 00:56:32.710480 | orchestrator | Saturday 11 April 2026 00:55:31 +0000 (0:00:00.292) 0:09:53.811 ******** 2026-04-11 00:56:32.710498 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.710505 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.710512 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.710518 | orchestrator | 2026-04-11 
00:56:32.710525 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 00:56:32.710531 | orchestrator | Saturday 11 April 2026 00:55:31 +0000 (0:00:00.429) 0:09:54.241 ******** 2026-04-11 00:56:32.710538 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.710544 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.710551 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.710558 | orchestrator | 2026-04-11 00:56:32.710564 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 00:56:32.710571 | orchestrator | Saturday 11 April 2026 00:55:31 +0000 (0:00:00.248) 0:09:54.490 ******** 2026-04-11 00:56:32.710578 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.710584 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.710591 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.710597 | orchestrator | 2026-04-11 00:56:32.710604 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 00:56:32.710611 | orchestrator | Saturday 11 April 2026 00:55:32 +0000 (0:00:00.291) 0:09:54.781 ******** 2026-04-11 00:56:32.710617 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.710623 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.710630 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.710637 | orchestrator | 2026-04-11 00:56:32.710644 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 00:56:32.710651 | orchestrator | Saturday 11 April 2026 00:55:32 +0000 (0:00:00.248) 0:09:55.029 ******** 2026-04-11 00:56:32.710657 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.710664 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.710670 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.710677 | orchestrator | 2026-04-11 00:56:32.710684 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 00:56:32.710690 | orchestrator | Saturday 11 April 2026 00:55:32 +0000 (0:00:00.474) 0:09:55.504 ******** 2026-04-11 00:56:32.710697 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:56:32.710704 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:56:32.710710 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:56:32.710717 | orchestrator | 2026-04-11 00:56:32.710724 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-11 00:56:32.710734 | orchestrator | Saturday 11 April 2026 00:55:33 +0000 (0:00:00.444) 0:09:55.949 ******** 2026-04-11 00:56:32.710740 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:56:32.710747 | orchestrator | 2026-04-11 00:56:32.710754 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-11 00:56:32.710761 | orchestrator | Saturday 11 April 2026 00:55:34 +0000 (0:00:00.594) 0:09:56.544 ******** 2026-04-11 00:56:32.710767 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:56:32.710780 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-11 00:56:32.710787 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 00:56:32.710794 | orchestrator | 2026-04-11 00:56:32.710800 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-11 00:56:32.710807 | orchestrator | Saturday 11 April 2026 00:55:36 +0000 (0:00:02.287) 0:09:58.831 ******** 2026-04-11 00:56:32.710813 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-11 00:56:32.710820 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-11 00:56:32.710826 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:56:32.710837 | orchestrator 
| changed: [testbed-node-3] => (item=None) 2026-04-11 00:56:32.710843 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-11 00:56:32.710850 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:56:32.710856 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-11 00:56:32.710863 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-11 00:56:32.710869 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:56:32.710876 | orchestrator | 2026-04-11 00:56:32.710888 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-11 00:56:32.710894 | orchestrator | Saturday 11 April 2026 00:55:37 +0000 (0:00:01.303) 0:10:00.135 ******** 2026-04-11 00:56:32.710900 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.710906 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.710912 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.710918 | orchestrator | 2026-04-11 00:56:32.710925 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-11 00:56:32.710931 | orchestrator | Saturday 11 April 2026 00:55:37 +0000 (0:00:00.286) 0:10:00.421 ******** 2026-04-11 00:56:32.710937 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:56:32.710943 | orchestrator | 2026-04-11 00:56:32.710949 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-11 00:56:32.710956 | orchestrator | Saturday 11 April 2026 00:55:38 +0000 (0:00:00.620) 0:10:01.042 ******** 2026-04-11 00:56:32.710962 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 00:56:32.710969 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 00:56:32.710975 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-11 00:56:32.710982 | orchestrator | 2026-04-11 00:56:32.710988 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-11 00:56:32.710995 | orchestrator | Saturday 11 April 2026 00:55:39 +0000 (0:00:00.823) 0:10:01.866 ******** 2026-04-11 00:56:32.711002 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:56:32.711008 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-11 00:56:32.711015 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:56:32.711022 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-11 00:56:32.711028 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:56:32.711034 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-11 00:56:32.711040 | orchestrator | 2026-04-11 00:56:32.711046 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-11 00:56:32.711052 | orchestrator | Saturday 11 April 2026 00:55:43 +0000 (0:00:04.586) 0:10:06.452 ******** 2026-04-11 00:56:32.711058 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:56:32.711064 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 00:56:32.711070 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:56:32.711076 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 00:56:32.711082 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:56:32.711088 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 00:56:32.711099 | orchestrator | 2026-04-11 00:56:32.711106 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-11 00:56:32.711110 | orchestrator | Saturday 11 April 2026 00:55:46 +0000 (0:00:02.878) 0:10:09.330 ******** 2026-04-11 00:56:32.711113 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-11 00:56:32.711117 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:56:32.711121 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-11 00:56:32.711133 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:56:32.711138 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-11 00:56:32.711141 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:56:32.711145 | orchestrator | 2026-04-11 00:56:32.711149 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-11 00:56:32.711153 | orchestrator | Saturday 11 April 2026 00:55:48 +0000 (0:00:01.252) 0:10:10.583 ******** 2026-04-11 00:56:32.711157 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-11 00:56:32.711160 | orchestrator | 2026-04-11 00:56:32.711167 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-11 00:56:32.711174 | orchestrator | Saturday 11 April 2026 00:55:48 +0000 (0:00:00.252) 0:10:10.836 ******** 2026-04-11 00:56:32.711180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-11 00:56:32.711187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 00:56:32.711194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 00:56:32.711201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 00:56:32.711207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 00:56:32.711214 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.711221 | orchestrator | 2026-04-11 00:56:32.711228 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-11 00:56:32.711235 | orchestrator | Saturday 11 April 2026 00:55:48 +0000 (0:00:00.558) 0:10:11.394 ******** 2026-04-11 00:56:32.711242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 00:56:32.711248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 00:56:32.711252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 00:56:32.711256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 00:56:32.711260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 00:56:32.711264 | orchestrator | skipping: [testbed-node-3] 2026-04-11 
00:56:32.711267 | orchestrator | 2026-04-11 00:56:32.711271 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-11 00:56:32.711275 | orchestrator | Saturday 11 April 2026 00:55:49 +0000 (0:00:00.565) 0:10:11.961 ******** 2026-04-11 00:56:32.711279 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 00:56:32.711283 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 00:56:32.711292 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 00:56:32.711296 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 00:56:32.711299 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 00:56:32.711303 | orchestrator | 2026-04-11 00:56:32.711307 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-11 00:56:32.711310 | orchestrator | Saturday 11 April 2026 00:56:17 +0000 (0:00:27.959) 0:10:39.920 ******** 2026-04-11 00:56:32.711314 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.711318 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.711322 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.711325 | orchestrator | 2026-04-11 00:56:32.711329 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-11 00:56:32.711333 | orchestrator | 
Saturday 11 April 2026 00:56:17 +0000 (0:00:00.317) 0:10:40.237 ******** 2026-04-11 00:56:32.711337 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.711340 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.711344 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.711348 | orchestrator | 2026-04-11 00:56:32.711351 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-11 00:56:32.711355 | orchestrator | Saturday 11 April 2026 00:56:18 +0000 (0:00:00.547) 0:10:40.785 ******** 2026-04-11 00:56:32.711359 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:56:32.711363 | orchestrator | 2026-04-11 00:56:32.711366 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-11 00:56:32.711373 | orchestrator | Saturday 11 April 2026 00:56:18 +0000 (0:00:00.526) 0:10:41.312 ******** 2026-04-11 00:56:32.711377 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:56:32.711380 | orchestrator | 2026-04-11 00:56:32.711384 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-11 00:56:32.711388 | orchestrator | Saturday 11 April 2026 00:56:19 +0000 (0:00:00.698) 0:10:42.011 ******** 2026-04-11 00:56:32.711392 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:56:32.711395 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:56:32.711402 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:56:32.711405 | orchestrator | 2026-04-11 00:56:32.711409 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-11 00:56:32.711413 | orchestrator | Saturday 11 April 2026 00:56:20 +0000 (0:00:01.378) 0:10:43.389 ******** 2026-04-11 00:56:32.711417 | orchestrator | changed: 
[testbed-node-4] 2026-04-11 00:56:32.711420 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:56:32.711424 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:56:32.711428 | orchestrator | 2026-04-11 00:56:32.711432 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-11 00:56:32.711435 | orchestrator | Saturday 11 April 2026 00:56:22 +0000 (0:00:01.242) 0:10:44.632 ******** 2026-04-11 00:56:32.711439 | orchestrator | changed: [testbed-node-4] 2026-04-11 00:56:32.711443 | orchestrator | changed: [testbed-node-3] 2026-04-11 00:56:32.711447 | orchestrator | changed: [testbed-node-5] 2026-04-11 00:56:32.711450 | orchestrator | 2026-04-11 00:56:32.711454 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-11 00:56:32.711458 | orchestrator | Saturday 11 April 2026 00:56:24 +0000 (0:00:01.960) 0:10:46.592 ******** 2026-04-11 00:56:32.711462 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 00:56:32.711465 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 00:56:32.711472 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-11 00:56:32.711476 | orchestrator | 2026-04-11 00:56:32.711479 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-11 00:56:32.711499 | orchestrator | Saturday 11 April 2026 00:56:27 +0000 (0:00:02.957) 0:10:49.550 ******** 2026-04-11 00:56:32.711504 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:56:32.711508 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:56:32.711512 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:56:32.711516 | orchestrator 
2026-04-11 00:56:32.711519 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-11 00:56:32.711523 | orchestrator | Saturday 11 April 2026 00:56:27 +0000 (0:00:00.304) 0:10:49.854 ********
2026-04-11 00:56:32.711527 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:56:32.711531 | orchestrator |
2026-04-11 00:56:32.711534 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-11 00:56:32.711538 | orchestrator | Saturday 11 April 2026 00:56:28 +0000 (0:00:00.797) 0:10:50.652 ********
2026-04-11 00:56:32.711542 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.711546 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.711549 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.711553 | orchestrator |
2026-04-11 00:56:32.711557 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-11 00:56:32.711561 | orchestrator | Saturday 11 April 2026 00:56:28 +0000 (0:00:00.329) 0:10:50.981 ********
2026-04-11 00:56:32.711564 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.711568 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:56:32.711572 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:56:32.711576 | orchestrator |
2026-04-11 00:56:32.711579 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-11 00:56:32.711583 | orchestrator | Saturday 11 April 2026 00:56:28 +0000 (0:00:00.324) 0:10:51.306 ********
2026-04-11 00:56:32.711587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 00:56:32.711591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 00:56:32.711594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 00:56:32.711598 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:56:32.711602 | orchestrator |
2026-04-11 00:56:32.711606 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-11 00:56:32.711609 | orchestrator | Saturday 11 April 2026 00:56:29 +0000 (0:00:00.833) 0:10:52.139 ********
2026-04-11 00:56:32.711613 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:56:32.711617 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:56:32.711621 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:56:32.711624 | orchestrator |
2026-04-11 00:56:32.711628 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:56:32.711632 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-04-11 00:56:32.711636 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-04-11 00:56:32.711640 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-04-11 00:56:32.711644 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-04-11 00:56:32.711650 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-04-11 00:56:32.711657 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-04-11 00:56:32.711661 | orchestrator |
2026-04-11 00:56:32.711668 | orchestrator |
2026-04-11 00:56:32.711672 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:56:32.711677 | orchestrator | Saturday 11 April 2026 00:56:30 +0000 (0:00:00.474) 0:10:52.614 ********
2026-04-11 00:56:32.711681 | orchestrator | ===============================================================================
2026-04-11 00:56:32.711685 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 57.65s
2026-04-11 00:56:32.711689 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.36s
2026-04-11 00:56:32.711693 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 42.01s
2026-04-11 00:56:32.711696 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 27.96s
2026-04-11 00:56:32.711700 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.20s
2026-04-11 00:56:32.711704 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.78s
2026-04-11 00:56:32.711708 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.56s
2026-04-11 00:56:32.711711 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.66s
2026-04-11 00:56:32.711715 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.19s
2026-04-11 00:56:32.711719 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.06s
2026-04-11 00:56:32.711722 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.75s
2026-04-11 00:56:32.711726 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.29s
2026-04-11 00:56:32.711730 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.96s
2026-04-11 00:56:32.711734 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.59s
2026-04-11 00:56:32.711737 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.38s
2026-04-11 00:56:32.711741 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.22s
2026-04-11 00:56:32.711745 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.17s
2026-04-11 00:56:32.711749 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.69s
2026-04-11 00:56:32.711752 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.54s
2026-04-11 00:56:32.711756 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.32s
2026-04-11 00:56:32.711760 | orchestrator | 2026-04-11 00:56:32 | INFO  | Task 5a21642b-5437-4bd5-bae7-65e65c3f1f9f is in state STARTED
2026-04-11 00:56:32.711764 | orchestrator | 2026-04-11 00:56:32 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:56:35.748371 | orchestrator | 2026-04-11 00:56:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:56:35.750474 | orchestrator | 2026-04-11 00:56:35 | INFO  | Task 5a21642b-5437-4bd5-bae7-65e65c3f1f9f is in state STARTED
2026-04-11 00:56:35.750573 | orchestrator | 2026-04-11 00:56:35 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:58:40.675942 | orchestrator | 2026-04-11 00:58:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:58:40.680548 | orchestrator | 2026-04-11 00:58:40 | INFO  | Task 5a21642b-5437-4bd5-bae7-65e65c3f1f9f is in state SUCCESS
2026-04-11 00:58:40.682771 | orchestrator |
2026-04-11 00:58:40.682853 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-11 00:58:40.682864 | orchestrator | 2.16.14
2026-04-11 00:58:40.682879 | orchestrator |
2026-04-11 00:58:40.682887 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-04-11 00:58:40.682895 | orchestrator |
2026-04-11 00:58:40.682902 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-11 00:58:40.682909 | orchestrator | Saturday 11 April 2026 00:56:35 +0000 (0:00:00.570) 0:00:00.570 ********
2026-04-11 00:58:40.682916 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 00:58:40.682921 | orchestrator |
2026-04-11 00:58:40.682925 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-11 00:58:40.682929 | orchestrator | Saturday 11 April 2026 00:56:35 +0000 (0:00:00.586) 0:00:01.156 ********
2026-04-11 00:58:40.682933 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:58:40.682937 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:58:40.682942 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:58:40.682945 | orchestrator |
2026-04-11 00:58:40.682949 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-11 00:58:40.682953 | orchestrator | Saturday 11 April 2026 00:56:36 +0000 (0:00:00.879) 0:00:02.036 ********
2026-04-11 00:58:40.682957 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:58:40.682960 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:58:40.682964 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:58:40.682968 | orchestrator |
2026-04-11 00:58:40.682971 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-11 00:58:40.682975 | orchestrator | Saturday 11 April 2026 00:56:36 +0000 (0:00:00.690) 0:00:02.294 ********
2026-04-11 00:58:40.682979 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:58:40.682983 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:58:40.682986 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:58:40.682990 | orchestrator |
2026-04-11 00:58:40.682994 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-11 00:58:40.682997 | orchestrator | Saturday 11 April 2026 00:56:37 +0000 (0:00:00.251) 0:00:02.985 ********
2026-04-11 00:58:40.683001 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:58:40.683005 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:58:40.683008 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:58:40.683012 | orchestrator |
2026-04-11 00:58:40.683016 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-11 00:58:40.683019 | orchestrator | Saturday 11 April 2026 00:56:37 +0000 (0:00:00.244) 0:00:03.237 ********
2026-04-11 00:58:40.683023 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:58:40.683027 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:58:40.683031 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:58:40.683034 | orchestrator |
2026-04-11 00:58:40.683038 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-11 00:58:40.683042 | orchestrator | Saturday 11 April 2026 00:56:37 +0000 (0:00:00.263) 0:00:03.482 ********
2026-04-11 00:58:40.683393 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:58:40.683404 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:58:40.683408 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:58:40.683412 | orchestrator |
2026-04-11 00:58:40.683416 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-11 00:58:40.683421 | orchestrator | Saturday 11 April 2026 00:56:38 +0000 (0:00:00.420) 0:00:03.746 ********
2026-04-11 00:58:40.683425 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:58:40.683430 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:58:40.683434 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:58:40.683438 | orchestrator |
2026-04-11 00:58:40.683458 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-11 00:58:40.683463 | orchestrator | Saturday 11 April 2026 00:56:38 +0000 (0:00:00.253) 0:00:04.166 ********
2026-04-11 00:58:40.683467 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:58:40.683471 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:58:40.683475 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:58:40.683479 | orchestrator |
2026-04-11 00:58:40.683483 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-11 00:58:40.683486 | orchestrator | Saturday 11 April 2026 00:56:38 +0000 (0:00:00.253) 0:00:04.420 ********
2026-04-11 00:58:40.683490 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 00:58:40.683494 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 00:58:40.683498 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 00:58:40.683502 | orchestrator |
2026-04-11 00:58:40.683506 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-11 00:58:40.683510 | orchestrator | Saturday 11 April 2026 00:56:39 +0000 (0:00:00.595) 0:00:05.016 ********
2026-04-11 00:58:40.683513 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:58:40.683517 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:58:40.683521 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:58:40.683568 | orchestrator |
2026-04-11 00:58:40.683573 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-11 00:58:40.683577 | orchestrator | Saturday 11 April 2026 00:56:39 +0000 (0:00:00.376) 0:00:05.393 ********
2026-04-11 00:58:40.683581 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 00:58:40.683584 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 00:58:40.683588 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 00:58:40.683592 | orchestrator |
2026-04-11 00:58:40.683739 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-11 00:58:40.683753 | orchestrator | Saturday 11 April 2026 00:56:42 +0000 (0:00:03.125) 0:00:08.518 ********
2026-04-11 00:58:40.683758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-11 00:58:40.683762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-11 00:58:40.683766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-11 00:58:40.683770 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:58:40.683774 | orchestrator |
2026-04-11 00:58:40.683800 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-11 00:58:40.683804 | orchestrator | Saturday 11 April 2026 00:56:43 +0000 (0:00:00.351) 0:00:08.869 ********
2026-04-11 00:58:40.683810 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 00:58:40.683820 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 00:58:40.683824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 00:58:40.683828 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:58:40.683832 | orchestrator |
2026-04-11 00:58:40.683836 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-11 00:58:40.683839 | orchestrator | Saturday 11 April 2026 00:56:43 +0000 (0:00:00.669) 0:00:09.539 ********
2026-04-11 00:58:40.683845 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 00:58:40.683885 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 00:58:40.683891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 00:58:40.683895 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:58:40.683899 | orchestrator |
2026-04-11 00:58:40.683903 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-11 00:58:40.683906 | orchestrator | Saturday 11 April 2026 00:56:44 +0000 (0:00:00.134) 0:00:09.673 ********
2026-04-11 00:58:40.683912 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1578c4e41d94', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 00:56:40.812797', 'end': '2026-04-11 00:56:40.846009', 'delta': '0:00:00.033212', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1578c4e41d94'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 00:58:40.683920 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '22eb5f35f42d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 00:56:41.913806', 'end': '2026-04-11 00:56:41.950040', 'delta': '0:00:00.036234', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['22eb5f35f42d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 00:58:40.683960 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2e9951bb6a0f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 00:56:42.782778', 'end': '2026-04-11 00:56:42.824036', 'delta': '0:00:00.041258', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2e9951bb6a0f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 00:58:40.683965 | orchestrator |
2026-04-11 00:58:40.683969 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-11 00:58:40.683978 | orchestrator | Saturday 11 April 2026 00:56:44 +0000 (0:00:00.307) 0:00:09.981 ********
2026-04-11 00:58:40.683981 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:58:40.683985 | orchestrator | ok: [testbed-node-4]
2026-04-11 00:58:40.683993 | orchestrator | ok: [testbed-node-5]
2026-04-11 00:58:40.683997 | orchestrator |
2026-04-11 00:58:40.684000 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-11 00:58:40.684004 | orchestrator | Saturday 11 April 2026 00:56:44 +0000 (0:00:00.391) 0:00:10.373 ********
2026-04-11 00:58:40.684008 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-11 00:58:40.684012 | orchestrator |
2026-04-11 00:58:40.684015 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-11 00:58:40.684019 | orchestrator | Saturday 11 April 2026 00:56:46 +0000 (0:00:01.615) 0:00:11.988 ********
2026-04-11 00:58:40.684023 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:58:40.684027 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:58:40.684030 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:58:40.684034 | orchestrator |
2026-04-11 00:58:40.684038 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-11 00:58:40.684042 | orchestrator | Saturday 11 April 2026 00:56:46 +0000 (0:00:00.288) 0:00:12.276 ********
2026-04-11 00:58:40.684045 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:58:40.684049 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:58:40.684053 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:58:40.684057 | orchestrator |
2026-04-11 00:58:40.684060 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 00:58:40.684064 | orchestrator | Saturday 11 April 2026 00:56:47 +0000 (0:00:00.466) 0:00:12.743 ********
2026-04-11 00:58:40.684068 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:58:40.684072 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:58:40.684076 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:58:40.684079 | orchestrator |
2026-04-11 00:58:40.684083 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-11 00:58:40.684087 | orchestrator | Saturday 11 April 2026 00:56:47 +0000 (0:00:00.125) 0:00:13.109 ********
2026-04-11 00:58:40.684091 | orchestrator | ok: [testbed-node-3]
2026-04-11 00:58:40.684094 | orchestrator |
2026-04-11 00:58:40.684098 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-11 00:58:40.684102 | orchestrator | Saturday 11 April 2026 00:56:47 +0000 (0:00:00.211) 0:00:13.235 ********
2026-04-11 00:58:40.684106 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:58:40.684109 | orchestrator |
2026-04-11 00:58:40.684113 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 00:58:40.684117 | orchestrator | Saturday 11 April 2026 00:56:47 +0000 (0:00:00.294) 0:00:13.447 ********
2026-04-11 00:58:40.684121 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:58:40.684124 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:58:40.684128 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:58:40.684132 | orchestrator |
2026-04-11 00:58:40.684135 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-11 00:58:40.684139 | orchestrator | Saturday 11 April 2026 00:56:48 +0000 (0:00:00.321) 0:00:13.741 ********
2026-04-11 00:58:40.684143 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:58:40.684146 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:58:40.684150 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:58:40.684154 | orchestrator |
2026-04-11 00:58:40.684158 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-11 00:58:40.684179 | orchestrator | Saturday 11 April 2026 00:56:48 +0000 (0:00:00.321) 0:00:14.063 ********
2026-04-11 00:58:40.684179 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:58:40.684183 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:58:40.684186 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:58:40.684190 | orchestrator | 2026-04-11 00:58:40.684194 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-11 00:58:40.684202 | orchestrator | Saturday 11 April 2026 00:56:48 +0000 (0:00:00.382) 0:00:14.445 ******** 2026-04-11 00:58:40.684206 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.684210 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:58:40.684214 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:58:40.684218 | orchestrator | 2026-04-11 00:58:40.684222 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-11 00:58:40.684225 | orchestrator | Saturday 11 April 2026 00:56:49 +0000 (0:00:00.310) 0:00:14.756 ******** 2026-04-11 00:58:40.684229 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.684233 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:58:40.684237 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:58:40.684240 | orchestrator | 2026-04-11 00:58:40.684244 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-11 00:58:40.684248 | orchestrator | Saturday 11 April 2026 00:56:49 +0000 (0:00:00.281) 0:00:15.037 ******** 2026-04-11 00:58:40.684252 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.684256 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:58:40.684260 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:58:40.684277 | orchestrator | 2026-04-11 00:58:40.684282 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-11 00:58:40.684286 | orchestrator | Saturday 11 April 2026 00:56:49 +0000 (0:00:00.277) 0:00:15.314 ******** 2026-04-11 00:58:40.684290 | orchestrator | skipping: [testbed-node-3] 
2026-04-11 00:58:40.684294 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:58:40.684298 | orchestrator | skipping: [testbed-node-5]
2026-04-11 00:58:40.684301 | orchestrator |
2026-04-11 00:58:40.684305 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-11 00:58:40.684312 | orchestrator | Saturday 11 April 2026 00:56:50 +0000 (0:00:00.393) 0:00:15.708 ********
2026-04-11 00:58:40.684317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e341cbc8--c5ec--51c3--80df--3957b9d9208d-osd--block--e341cbc8--c5ec--51c3--80df--3957b9d9208d', 'dm-uuid-LVM-S0wDmmgSvTIVfiwEGF0g1F0CnemPmWNqM9F7fgHWifJ2wCsLOyzxcm4ZH2xKiHNJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11d3af0d--f4a7--5343--8eab--aab153531328-osd--block--11d3af0d--f4a7--5343--8eab--aab153531328', 'dm-uuid-LVM-tDUd342fe4ZnfDc8Wu8UNw3aUoGr3BRp821E7R7E3xk6XXij1PFL9yNGwrqylWZT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--55dbb7e6--b9f3--5799--9073--e4231f2b06a1-osd--block--55dbb7e6--b9f3--5799--9073--e4231f2b06a1', 'dm-uuid-LVM-q4aABbXV26TQdr8S8cV0O5GfkzomOoD2vACehG4dYj4ziiqzz3denVOa1N38WtSr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--296a8e73--f310--5078--85ef--ed795f213448-osd--block--296a8e73--f310--5078--85ef--ed795f213448', 'dm-uuid-LVM-PiByfQxnJtnDQmV6t0YalLZV3p17iPBsOgsdbURyA1D5C5469CzdNcaWVBWmTopI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part1', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part14', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part15', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part16', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 00:58:40.684467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e341cbc8--c5ec--51c3--80df--3957b9d9208d-osd--block--e341cbc8--c5ec--51c3--80df--3957b9d9208d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-olOe9G-ekTw-7Vdj-LpbS-YSDd-fjEj-vmBB6x', 'scsi-0QEMU_QEMU_HARDDISK_dbdf8861-c941-4e45-ade6-c92a4c866684', 'scsi-SQEMU_QEMU_HARDDISK_dbdf8861-c941-4e45-ade6-c92a4c866684'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 00:58:40.684485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--11d3af0d--f4a7--5343--8eab--aab153531328-osd--block--11d3af0d--f4a7--5343--8eab--aab153531328'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Dyo5Vg-NZ73-VPuR-egkh-6Fdi-S64F-ngOBFV', 'scsi-0QEMU_QEMU_HARDDISK_baefa5b6-df7d-4524-a048-85b2fa508a08', 'scsi-SQEMU_QEMU_HARDDISK_baefa5b6-df7d-4524-a048-85b2fa508a08'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 00:58:40.684498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_766868e9-fa25-49fa-8c1f-44c6e6ffb547', 'scsi-SQEMU_QEMU_HARDDISK_766868e9-fa25-49fa-8c1f-44c6e6ffb547'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 00:58:40.684535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 00:58:40.684549 | orchestrator | skipping: [testbed-node-3]
2026-04-11 00:58:40.684556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512',
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part1', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part14', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part15', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part16', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 00:58:40.684603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--55dbb7e6--b9f3--5799--9073--e4231f2b06a1-osd--block--55dbb7e6--b9f3--5799--9073--e4231f2b06a1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HA4VZ2-mxEg-9HSl-BTOI-sGBM-yJfd-TmKx1R', 'scsi-0QEMU_QEMU_HARDDISK_94d6c339-d22c-4552-bea0-bfaa215148a5', 'scsi-SQEMU_QEMU_HARDDISK_94d6c339-d22c-4552-bea0-bfaa215148a5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 00:58:40.684608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b3fdd62a--f314--5cc9--917d--dd1b13b97b05-osd--block--b3fdd62a--f314--5cc9--917d--dd1b13b97b05', 'dm-uuid-LVM-9bViJP2WnhEwOKK4t0xE1sWvZ4XgBYlv87Aq0YZ8iV3pvrtVTiQyuFt7x0O0f2BF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--296a8e73--f310--5078--85ef--ed795f213448-osd--block--296a8e73--f310--5078--85ef--ed795f213448'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qhUFuc-8WYQ-4Jlf-4HeJ-A0U0-sNN1-ZGuRig', 'scsi-0QEMU_QEMU_HARDDISK_5beaa40e-2504-4a54-8b0e-fa95ef0fce86', 'scsi-SQEMU_QEMU_HARDDISK_5beaa40e-2504-4a54-8b0e-fa95ef0fce86'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 00:58:40.684624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1d143ac2--4627--58f1--a5c6--72fac7fe3e62-osd--block--1d143ac2--4627--58f1--a5c6--72fac7fe3e62', 'dm-uuid-LVM-22G0szTE25Xt7DPLpIaRBVJHOyB57KhVqhCHe8dePTS9MLDWlV9OFtlgP4roAWwd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_73d9f47d-c224-4fc5-85fe-6159d35bcb3b', 'scsi-SQEMU_QEMU_HARDDISK_73d9f47d-c224-4fc5-85fe-6159d35bcb3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 00:58:40.684641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 00:58:40.684658 | orchestrator | skipping: [testbed-node-4]
2026-04-11 00:58:40.684664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 00:58:40.684728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part1', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part14', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part15', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part16', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 00:58:40.684740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b3fdd62a--f314--5cc9--917d--dd1b13b97b05-osd--block--b3fdd62a--f314--5cc9--917d--dd1b13b97b05'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Db0l3X-fLkZ-wodJ-EOWN-hKEx-fTZd-L0AKc1', 'scsi-0QEMU_QEMU_HARDDISK_95d65739-9dbf-4d48-9a83-eea9c175e7b9', 'scsi-SQEMU_QEMU_HARDDISK_95d65739-9dbf-4d48-9a83-eea9c175e7b9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 00:58:40.684746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1d143ac2--4627--58f1--a5c6--72fac7fe3e62-osd--block--1d143ac2--4627--58f1--a5c6--72fac7fe3e62'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AUSRl6-Q0zh-jZKV-Fqhx-ykrt-rr44-JZ8eGj', 'scsi-0QEMU_QEMU_HARDDISK_f9459d28-2556-4a8a-b254-3aab23978995', 'scsi-SQEMU_QEMU_HARDDISK_f9459d28-2556-4a8a-b254-3aab23978995'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 00:58:40.684750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6786cac-aebf-4828-8780-9d987947f514', 'scsi-SQEMU_QEMU_HARDDISK_a6786cac-aebf-4828-8780-9d987947f514'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:58:40.684759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 00:58:40.684763 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:58:40.684767 | orchestrator | 2026-04-11 00:58:40.684771 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 00:58:40.684777 | orchestrator | Saturday 11 April 2026 00:56:50 +0000 (0:00:00.478) 0:00:16.187 ******** 2026-04-11 00:58:40.684783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e341cbc8--c5ec--51c3--80df--3957b9d9208d-osd--block--e341cbc8--c5ec--51c3--80df--3957b9d9208d', 'dm-uuid-LVM-S0wDmmgSvTIVfiwEGF0g1F0CnemPmWNqM9F7fgHWifJ2wCsLOyzxcm4ZH2xKiHNJ'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684791 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11d3af0d--f4a7--5343--8eab--aab153531328-osd--block--11d3af0d--f4a7--5343--8eab--aab153531328', 'dm-uuid-LVM-tDUd342fe4ZnfDc8Wu8UNw3aUoGr3BRp821E7R7E3xk6XXij1PFL9yNGwrqylWZT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684795 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684804 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684812 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684818 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684826 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684834 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--55dbb7e6--b9f3--5799--9073--e4231f2b06a1-osd--block--55dbb7e6--b9f3--5799--9073--e4231f2b06a1', 'dm-uuid-LVM-q4aABbXV26TQdr8S8cV0O5GfkzomOoD2vACehG4dYj4ziiqzz3denVOa1N38WtSr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684845 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--296a8e73--f310--5078--85ef--ed795f213448-osd--block--296a8e73--f310--5078--85ef--ed795f213448', 'dm-uuid-LVM-PiByfQxnJtnDQmV6t0YalLZV3p17iPBsOgsdbURyA1D5C5469CzdNcaWVBWmTopI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-04-11 00:58:40.684852 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part1', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part14', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part15', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part16', 'scsi-SQEMU_QEMU_HARDDISK_d951abfa-6c54-4adb-91f6-ed79e55ad62b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684859 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684866 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e341cbc8--c5ec--51c3--80df--3957b9d9208d-osd--block--e341cbc8--c5ec--51c3--80df--3957b9d9208d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-olOe9G-ekTw-7Vdj-LpbS-YSDd-fjEj-vmBB6x', 'scsi-0QEMU_QEMU_HARDDISK_dbdf8861-c941-4e45-ade6-c92a4c866684', 'scsi-SQEMU_QEMU_HARDDISK_dbdf8861-c941-4e45-ade6-c92a4c866684'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684874 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684880 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--11d3af0d--f4a7--5343--8eab--aab153531328-osd--block--11d3af0d--f4a7--5343--8eab--aab153531328'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Dyo5Vg-NZ73-VPuR-egkh-6Fdi-S64F-ngOBFV', 'scsi-0QEMU_QEMU_HARDDISK_baefa5b6-df7d-4524-a048-85b2fa508a08', 'scsi-SQEMU_QEMU_HARDDISK_baefa5b6-df7d-4524-a048-85b2fa508a08'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684884 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684888 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_766868e9-fa25-49fa-8c1f-44c6e6ffb547', 'scsi-SQEMU_QEMU_HARDDISK_766868e9-fa25-49fa-8c1f-44c6e6ffb547'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684892 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684900 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684907 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684915 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684919 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.684923 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684927 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684938 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part1', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part14', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part15', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part16', 'scsi-SQEMU_QEMU_HARDDISK_19f6c328-a561-4897-bc92-2bbedec03b22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684946 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b3fdd62a--f314--5cc9--917d--dd1b13b97b05-osd--block--b3fdd62a--f314--5cc9--917d--dd1b13b97b05', 'dm-uuid-LVM-9bViJP2WnhEwOKK4t0xE1sWvZ4XgBYlv87Aq0YZ8iV3pvrtVTiQyuFt7x0O0f2BF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--55dbb7e6--b9f3--5799--9073--e4231f2b06a1-osd--block--55dbb7e6--b9f3--5799--9073--e4231f2b06a1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HA4VZ2-mxEg-9HSl-BTOI-sGBM-yJfd-TmKx1R', 'scsi-0QEMU_QEMU_HARDDISK_94d6c339-d22c-4552-bea0-bfaa215148a5', 'scsi-SQEMU_QEMU_HARDDISK_94d6c339-d22c-4552-bea0-bfaa215148a5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684954 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1d143ac2--4627--58f1--a5c6--72fac7fe3e62-osd--block--1d143ac2--4627--58f1--a5c6--72fac7fe3e62', 'dm-uuid-LVM-22G0szTE25Xt7DPLpIaRBVJHOyB57KhVqhCHe8dePTS9MLDWlV9OFtlgP4roAWwd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684962 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--296a8e73--f310--5078--85ef--ed795f213448-osd--block--296a8e73--f310--5078--85ef--ed795f213448'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qhUFuc-8WYQ-4Jlf-4HeJ-A0U0-sNN1-ZGuRig', 'scsi-0QEMU_QEMU_HARDDISK_5beaa40e-2504-4a54-8b0e-fa95ef0fce86', 'scsi-SQEMU_QEMU_HARDDISK_5beaa40e-2504-4a54-8b0e-fa95ef0fce86'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684981 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684985 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_73d9f47d-c224-4fc5-85fe-6159d35bcb3b', 'scsi-SQEMU_QEMU_HARDDISK_73d9f47d-c224-4fc5-85fe-6159d35bcb3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684989 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684993 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.684998 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.685001 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:58:40.685009 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.685019 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.685023 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.685027 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.685031 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.685042 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part1', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part14', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part15', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part16', 'scsi-SQEMU_QEMU_HARDDISK_b25588ff-c947-4792-a589-2d084d0eef3c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.685050 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b3fdd62a--f314--5cc9--917d--dd1b13b97b05-osd--block--b3fdd62a--f314--5cc9--917d--dd1b13b97b05'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Db0l3X-fLkZ-wodJ-EOWN-hKEx-fTZd-L0AKc1', 'scsi-0QEMU_QEMU_HARDDISK_95d65739-9dbf-4d48-9a83-eea9c175e7b9', 'scsi-SQEMU_QEMU_HARDDISK_95d65739-9dbf-4d48-9a83-eea9c175e7b9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.685054 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1d143ac2--4627--58f1--a5c6--72fac7fe3e62-osd--block--1d143ac2--4627--58f1--a5c6--72fac7fe3e62'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AUSRl6-Q0zh-jZKV-Fqhx-ykrt-rr44-JZ8eGj', 'scsi-0QEMU_QEMU_HARDDISK_f9459d28-2556-4a8a-b254-3aab23978995', 'scsi-SQEMU_QEMU_HARDDISK_f9459d28-2556-4a8a-b254-3aab23978995'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.685058 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6786cac-aebf-4828-8780-9d987947f514', 'scsi-SQEMU_QEMU_HARDDISK_a6786cac-aebf-4828-8780-9d987947f514'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.685065 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-00-03-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 00:58:40.685072 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:58:40.685076 | orchestrator | 2026-04-11 00:58:40.685083 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-11 00:58:40.685087 | orchestrator | Saturday 11 April 2026 00:56:51 +0000 (0:00:00.517) 0:00:16.704 ******** 2026-04-11 00:58:40.685091 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:58:40.685095 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:58:40.685099 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:58:40.685103 | orchestrator | 2026-04-11 00:58:40.685107 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-11 00:58:40.685110 | orchestrator | Saturday 11 April 2026 00:56:51 +0000 (0:00:00.609) 0:00:17.313 ******** 2026-04-11 00:58:40.685114 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:58:40.685118 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:58:40.685122 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:58:40.685125 | orchestrator | 2026-04-11 00:58:40.685129 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 00:58:40.685133 | orchestrator | Saturday 11 April 2026 00:56:52 +0000 (0:00:00.428) 0:00:17.742 ******** 2026-04-11 00:58:40.685137 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:58:40.685140 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:58:40.685144 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:58:40.685148 | orchestrator | 2026-04-11 00:58:40.685152 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 00:58:40.685155 | orchestrator | Saturday 11 April 2026 00:56:52 +0000 (0:00:00.590) 0:00:18.332 
******** 2026-04-11 00:58:40.685218 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.685223 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:58:40.685227 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:58:40.685231 | orchestrator | 2026-04-11 00:58:40.685235 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 00:58:40.685239 | orchestrator | Saturday 11 April 2026 00:56:53 +0000 (0:00:00.250) 0:00:18.583 ******** 2026-04-11 00:58:40.685243 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.685247 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:58:40.685251 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:58:40.685254 | orchestrator | 2026-04-11 00:58:40.685258 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 00:58:40.685262 | orchestrator | Saturday 11 April 2026 00:56:53 +0000 (0:00:00.427) 0:00:19.011 ******** 2026-04-11 00:58:40.685266 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.685269 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:58:40.685273 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:58:40.685277 | orchestrator | 2026-04-11 00:58:40.685281 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 00:58:40.685284 | orchestrator | Saturday 11 April 2026 00:56:53 +0000 (0:00:00.531) 0:00:19.542 ******** 2026-04-11 00:58:40.685288 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-11 00:58:40.685292 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-11 00:58:40.685296 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-11 00:58:40.685300 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-11 00:58:40.685305 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-11 00:58:40.685310 | orchestrator | 
ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-11 00:58:40.685322 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-11 00:58:40.685328 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-11 00:58:40.685333 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-11 00:58:40.685340 | orchestrator | 2026-04-11 00:58:40.685346 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 00:58:40.685353 | orchestrator | Saturday 11 April 2026 00:56:54 +0000 (0:00:00.823) 0:00:20.365 ******** 2026-04-11 00:58:40.685360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-11 00:58:40.685367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-11 00:58:40.685373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-11 00:58:40.685380 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.685384 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-11 00:58:40.685388 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-11 00:58:40.685392 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-11 00:58:40.685395 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:58:40.685399 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-11 00:58:40.685403 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-11 00:58:40.685406 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-11 00:58:40.685410 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:58:40.685414 | orchestrator | 2026-04-11 00:58:40.685418 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 00:58:40.685422 | orchestrator | Saturday 11 April 2026 00:56:55 +0000 (0:00:00.374) 0:00:20.740 ******** 2026-04-11 
00:58:40.685426 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 00:58:40.685430 | orchestrator | 2026-04-11 00:58:40.685434 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 00:58:40.685439 | orchestrator | Saturday 11 April 2026 00:56:55 +0000 (0:00:00.671) 0:00:21.412 ******** 2026-04-11 00:58:40.685447 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.685451 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:58:40.685454 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:58:40.685458 | orchestrator | 2026-04-11 00:58:40.685462 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 00:58:40.685466 | orchestrator | Saturday 11 April 2026 00:56:56 +0000 (0:00:00.302) 0:00:21.715 ******** 2026-04-11 00:58:40.685469 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.685473 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:58:40.685479 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:58:40.685485 | orchestrator | 2026-04-11 00:58:40.685496 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 00:58:40.685506 | orchestrator | Saturday 11 April 2026 00:56:56 +0000 (0:00:00.291) 0:00:22.006 ******** 2026-04-11 00:58:40.685512 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.685518 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:58:40.685523 | orchestrator | skipping: [testbed-node-5] 2026-04-11 00:58:40.685529 | orchestrator | 2026-04-11 00:58:40.685535 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 00:58:40.685541 | orchestrator | Saturday 11 April 2026 00:56:56 +0000 (0:00:00.308) 0:00:22.314 ******** 2026-04-11 
00:58:40.685546 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:58:40.685552 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:58:40.685557 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:58:40.685563 | orchestrator | 2026-04-11 00:58:40.685568 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 00:58:40.685574 | orchestrator | Saturday 11 April 2026 00:56:57 +0000 (0:00:00.617) 0:00:22.932 ******** 2026-04-11 00:58:40.685580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 00:58:40.685592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 00:58:40.685598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 00:58:40.685603 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.685609 | orchestrator | 2026-04-11 00:58:40.685616 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 00:58:40.685621 | orchestrator | Saturday 11 April 2026 00:56:57 +0000 (0:00:00.374) 0:00:23.306 ******** 2026-04-11 00:58:40.685627 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 00:58:40.685633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 00:58:40.685639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 00:58:40.685646 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.685651 | orchestrator | 2026-04-11 00:58:40.685659 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 00:58:40.685663 | orchestrator | Saturday 11 April 2026 00:56:58 +0000 (0:00:00.421) 0:00:23.727 ******** 2026-04-11 00:58:40.685667 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 00:58:40.685671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 00:58:40.685675 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 00:58:40.685678 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.685682 | orchestrator | 2026-04-11 00:58:40.685686 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 00:58:40.685690 | orchestrator | Saturday 11 April 2026 00:56:58 +0000 (0:00:00.407) 0:00:24.135 ******** 2026-04-11 00:58:40.685693 | orchestrator | ok: [testbed-node-3] 2026-04-11 00:58:40.685697 | orchestrator | ok: [testbed-node-4] 2026-04-11 00:58:40.685701 | orchestrator | ok: [testbed-node-5] 2026-04-11 00:58:40.685705 | orchestrator | 2026-04-11 00:58:40.685709 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 00:58:40.685712 | orchestrator | Saturday 11 April 2026 00:56:58 +0000 (0:00:00.313) 0:00:24.449 ******** 2026-04-11 00:58:40.685716 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-11 00:58:40.685720 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-11 00:58:40.685724 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-11 00:58:40.685728 | orchestrator | 2026-04-11 00:58:40.685731 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 00:58:40.685735 | orchestrator | Saturday 11 April 2026 00:56:59 +0000 (0:00:00.539) 0:00:24.989 ******** 2026-04-11 00:58:40.685739 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 00:58:40.685743 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 00:58:40.685747 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 00:58:40.685751 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-11 00:58:40.685755 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-11 00:58:40.685759 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 00:58:40.685763 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 00:58:40.685766 | orchestrator | 2026-04-11 00:58:40.685770 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 00:58:40.685774 | orchestrator | Saturday 11 April 2026 00:57:00 +0000 (0:00:00.994) 0:00:25.984 ******** 2026-04-11 00:58:40.685778 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 00:58:40.685782 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 00:58:40.685785 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 00:58:40.685789 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-11 00:58:40.685797 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 00:58:40.685801 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 00:58:40.685809 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 00:58:40.685813 | orchestrator | 2026-04-11 00:58:40.685817 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-11 00:58:40.685820 | orchestrator | Saturday 11 April 2026 00:57:02 +0000 (0:00:01.874) 0:00:27.858 ******** 2026-04-11 00:58:40.685824 | orchestrator | skipping: [testbed-node-3] 2026-04-11 00:58:40.685828 | orchestrator | skipping: [testbed-node-4] 2026-04-11 00:58:40.685832 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-11 00:58:40.685836 | orchestrator | 2026-04-11 00:58:40.685846 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-11 00:58:40.685850 | orchestrator | Saturday 11 April 2026 00:57:02 +0000 (0:00:00.347) 0:00:28.206 ******** 2026-04-11 00:58:40.685855 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-11 00:58:40.685860 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-11 00:58:40.685864 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-11 00:58:40.685868 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-11 00:58:40.685872 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-11 00:58:40.685875 | orchestrator | 2026-04-11 00:58:40.685879 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-11 00:58:40.685883 | orchestrator | Saturday 11 April 2026 00:57:45 +0000 (0:00:43.176) 0:01:11.382 ******** 2026-04-11 00:58:40.685887 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685890 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685894 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685898 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685902 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685906 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685909 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-11 00:58:40.685913 | orchestrator | 2026-04-11 00:58:40.685917 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-11 00:58:40.685921 | orchestrator | Saturday 11 April 2026 00:58:09 +0000 (0:00:23.306) 0:01:34.688 ******** 2026-04-11 00:58:40.685925 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685933 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685937 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685941 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685945 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685948 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685952 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 00:58:40.685956 | orchestrator | 2026-04-11 00:58:40.685960 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-11 00:58:40.685963 | orchestrator | Saturday 11 April 2026 00:58:21 +0000 (0:00:11.927) 0:01:46.615 ******** 2026-04-11 00:58:40.685967 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685971 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-11 00:58:40.685975 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-11 00:58:40.685979 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685983 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-11 00:58:40.685990 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-11 00:58:40.685994 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.685998 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-11 00:58:40.686002 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-11 00:58:40.686005 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.686012 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-11 00:58:40.686070 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-11 00:58:40.686075 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 00:58:40.686078 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-11 00:58:40.686082 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-11 00:58:40.686086 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 00:58:40.686090 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-11 00:58:40.686094 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-11 00:58:40.686098 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-04-11 00:58:40.686102 | orchestrator |
2026-04-11 00:58:40.686105 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 00:58:40.686109 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-11 00:58:40.686115 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-11 00:58:40.686120 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-11 00:58:40.686124 | orchestrator |
2026-04-11 00:58:40.686127 | orchestrator |
2026-04-11 00:58:40.686131 | orchestrator |
2026-04-11 00:58:40.686135 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 00:58:40.686139 | orchestrator | Saturday 11 April 2026 00:58:38 +0000 (0:00:17.342) 0:02:03.958 ********
2026-04-11 00:58:40.686147 | orchestrator | ===============================================================================
2026-04-11 00:58:40.686151 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.18s
2026-04-11 00:58:40.686155 | orchestrator | generate keys ---------------------------------------------------------- 23.31s
2026-04-11 00:58:40.686195 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.34s
2026-04-11 00:58:40.686201 | orchestrator | get keys from monitors ------------------------------------------------- 11.93s
2026-04-11 00:58:40.686205 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.13s
2026-04-11 00:58:40.686209 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.87s
2026-04-11 00:58:40.686212 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.62s
2026-04-11 00:58:40.686216 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.99s
2026-04-11 00:58:40.686220 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.88s
2026-04-11 00:58:40.686223 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.82s
2026-04-11 00:58:40.686227 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.69s
2026-04-11 00:58:40.686231 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.67s
2026-04-11 00:58:40.686235 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.67s
2026-04-11 00:58:40.686239 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.62s
2026-04-11 00:58:40.686243 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.61s
2026-04-11 00:58:40.686247 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.60s
2026-04-11 00:58:40.686250 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.59s
2026-04-11 00:58:40.686254 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.59s
2026-04-11 00:58:40.686258 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.54s
2026-04-11 00:58:40.686261 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.53s
2026-04-11 00:58:40.686265 | orchestrator | 2026-04-11 00:58:40 | INFO  | Task 0423c5bb-fbe2-4caa-acfd-6dedc5346e0d is in state STARTED
2026-04-11 00:58:40.686269 | orchestrator | 2026-04-11 00:58:40 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:58:43.735668 | orchestrator | 2026-04-11 00:58:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:58:43.738988 | orchestrator | 2026-04-11 00:58:43 | INFO  | Task 0423c5bb-fbe2-4caa-acfd-6dedc5346e0d is in state STARTED
2026-04-11 00:58:43.739054 | orchestrator | 2026-04-11 00:58:43 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:58:46.780952 | orchestrator | 2026-04-11 00:58:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:58:46.784032 | orchestrator | 2026-04-11 00:58:46 | INFO  | Task 0423c5bb-fbe2-4caa-acfd-6dedc5346e0d is in state STARTED
2026-04-11 00:58:46.784282 | orchestrator | 2026-04-11 00:58:46 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:58:49.827876 | orchestrator | 2026-04-11 00:58:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:58:49.829911 | orchestrator | 2026-04-11 00:58:49 | INFO  | Task 0423c5bb-fbe2-4caa-acfd-6dedc5346e0d is in state STARTED
2026-04-11 00:58:49.829965 | orchestrator | 2026-04-11 00:58:49 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:58:52.870548 | orchestrator | 2026-04-11 00:58:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:58:52.872825 | orchestrator | 2026-04-11 00:58:52 | INFO  | Task 0423c5bb-fbe2-4caa-acfd-6dedc5346e0d is in state STARTED
2026-04-11 00:58:52.872904 | orchestrator | 2026-04-11 00:58:52 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:58:55.921024 | orchestrator | 2026-04-11 00:58:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:58:55.922685 | orchestrator | 2026-04-11 00:58:55 | INFO  | Task 0423c5bb-fbe2-4caa-acfd-6dedc5346e0d is in state STARTED
2026-04-11 00:58:55.922885 | orchestrator | 2026-04-11 00:58:55 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:58:58.966746 | orchestrator | 2026-04-11 00:58:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:58:58.968797 | orchestrator | 2026-04-11 00:58:58 | INFO  | Task 0423c5bb-fbe2-4caa-acfd-6dedc5346e0d is in state STARTED
2026-04-11 00:58:58.969014 | orchestrator | 2026-04-11 00:58:58 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:02.023536 | orchestrator | 2026-04-11 00:59:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:02.023991 | orchestrator | 2026-04-11 00:59:02 | INFO  | Task 0423c5bb-fbe2-4caa-acfd-6dedc5346e0d is in state STARTED
2026-04-11 00:59:02.024399 | orchestrator | 2026-04-11 00:59:02 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:05.072560 | orchestrator | 2026-04-11 00:59:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:05.073569 | orchestrator | 2026-04-11 00:59:05 | INFO  | Task 0423c5bb-fbe2-4caa-acfd-6dedc5346e0d is in state STARTED
2026-04-11 00:59:05.073773 | orchestrator | 2026-04-11 00:59:05 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:08.122839 | orchestrator | 2026-04-11 00:59:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:08.122917 | orchestrator | 2026-04-11 00:59:08 | INFO  | Task 0423c5bb-fbe2-4caa-acfd-6dedc5346e0d is in state STARTED
2026-04-11 00:59:08.122940 | orchestrator | 2026-04-11 00:59:08 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:11.163909 | orchestrator | 2026-04-11 00:59:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:11.165044 | orchestrator | 2026-04-11 00:59:11 | INFO  | Task 0423c5bb-fbe2-4caa-acfd-6dedc5346e0d is in state STARTED
2026-04-11 00:59:11.165192 | orchestrator | 2026-04-11 00:59:11 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:14.217366 | orchestrator | 2026-04-11 00:59:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:14.217457 | orchestrator | 2026-04-11 00:59:14 | INFO  | Task 0423c5bb-fbe2-4caa-acfd-6dedc5346e0d is in state STARTED
2026-04-11 00:59:14.217795 | orchestrator | 2026-04-11 00:59:14 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:17.272118 | orchestrator | 2026-04-11 00:59:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:17.273672 | orchestrator | 2026-04-11 00:59:17 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:17.275360 | orchestrator | 2026-04-11 00:59:17 | INFO  | Task 0423c5bb-fbe2-4caa-acfd-6dedc5346e0d is in state SUCCESS
2026-04-11 00:59:17.275793 | orchestrator | 2026-04-11 00:59:17 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:20.314594 | orchestrator | 2026-04-11 00:59:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:20.315961 | orchestrator | 2026-04-11 00:59:20 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:20.316012 | orchestrator | 2026-04-11 00:59:20 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:23.355392 | orchestrator | 2026-04-11 00:59:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:23.355529 | orchestrator | 2026-04-11 00:59:23 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:23.355556 | orchestrator | 2026-04-11 00:59:23 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:26.395942 | orchestrator | 2026-04-11 00:59:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:26.397774 | orchestrator | 2026-04-11 00:59:26 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:26.398406 | orchestrator | 2026-04-11 00:59:26 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:29.442425 | orchestrator | 2026-04-11 00:59:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:29.442576 | orchestrator | 2026-04-11 00:59:29 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:29.442601 | orchestrator | 2026-04-11 00:59:29 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:32.481173 | orchestrator | 2026-04-11 00:59:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:32.482658 | orchestrator | 2026-04-11 00:59:32 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:32.482767 | orchestrator | 2026-04-11 00:59:32 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:35.529816 | orchestrator | 2026-04-11 00:59:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:35.531584 | orchestrator | 2026-04-11 00:59:35 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:35.531646 | orchestrator | 2026-04-11 00:59:35 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:38.568095 | orchestrator | 2026-04-11 00:59:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:38.569332 | orchestrator | 2026-04-11 00:59:38 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:38.569381 | orchestrator | 2026-04-11 00:59:38 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:41.608642 | orchestrator | 2026-04-11 00:59:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:41.610304 | orchestrator | 2026-04-11 00:59:41 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:41.610358 | orchestrator | 2026-04-11 00:59:41 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:44.648823 | orchestrator | 2026-04-11 00:59:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:44.650791 | orchestrator | 2026-04-11 00:59:44 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:44.650863 | orchestrator | 2026-04-11 00:59:44 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:47.686335 | orchestrator | 2026-04-11 00:59:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:47.686494 | orchestrator | 2026-04-11 00:59:47 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:47.686515 | orchestrator | 2026-04-11 00:59:47 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:50.727151 | orchestrator | 2026-04-11 00:59:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:50.728473 | orchestrator | 2026-04-11 00:59:50 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:50.728623 | orchestrator | 2026-04-11 00:59:50 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:53.773442 | orchestrator | 2026-04-11 00:59:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:53.775891 | orchestrator | 2026-04-11 00:59:53 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:53.776057 | orchestrator | 2026-04-11 00:59:53 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:56.824083 | orchestrator | 2026-04-11 00:59:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:56.825921 | orchestrator | 2026-04-11 00:59:56 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:56.826055 | orchestrator | 2026-04-11 00:59:56 | INFO  | Wait 1 second(s) until the next check
2026-04-11 00:59:59.865609 | orchestrator | 2026-04-11 00:59:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 00:59:59.867542 | orchestrator | 2026-04-11 00:59:59 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 00:59:59.867602 | orchestrator | 2026-04-11 00:59:59 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:02.916230 | orchestrator | 2026-04-11 01:00:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:02.917887 | orchestrator | 2026-04-11 01:00:02 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 01:00:02.918061 | orchestrator | 2026-04-11 01:00:02 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:05.968842 | orchestrator | 2026-04-11 01:00:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:05.969707 | orchestrator | 2026-04-11 01:00:05 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 01:00:05.969842 | orchestrator | 2026-04-11 01:00:05 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:09.020685 | orchestrator | 2026-04-11 01:00:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:09.025008 | orchestrator | 2026-04-11 01:00:09 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 01:00:09.025064 | orchestrator | 2026-04-11 01:00:09 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:12.079048 | orchestrator | 2026-04-11 01:00:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:12.080089 | orchestrator | 2026-04-11 01:00:12 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state STARTED
2026-04-11 01:00:12.080223 | orchestrator | 2026-04-11 01:00:12 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:15.138241 | orchestrator | 2026-04-11 01:00:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:15.141645 | orchestrator | 2026-04-11 01:00:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:00:15.145407 | orchestrator |
2026-04-11 01:00:15.145489 | orchestrator |
2026-04-11 01:00:15.145496 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-04-11 01:00:15.145502 | orchestrator |
2026-04-11 01:00:15.145506 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-04-11 01:00:15.145510 | orchestrator | Saturday 11 April 2026 00:58:42 +0000 (0:00:00.231) 0:00:00.231 ********
2026-04-11 01:00:15.145515 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-11 01:00:15.145520 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-11 01:00:15.145544 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-11 01:00:15.145549 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-11 01:00:15.145553 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-11 01:00:15.145557 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-11 01:00:15.145561 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-11 01:00:15.145565 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-11 01:00:15.145569 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-11 01:00:15.145573 | orchestrator |
2026-04-11 01:00:15.145577 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-11 01:00:15.145580 | orchestrator | Saturday 11 April 2026 00:58:46 +0000 (0:00:04.652) 0:00:04.883 ********
2026-04-11 01:00:15.145584 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-11 01:00:15.145588 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-11 01:00:15.145592 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-11 01:00:15.145596 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-11 01:00:15.145599 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-11 01:00:15.145603 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-11 01:00:15.145607 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-11 01:00:15.145611 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-11 01:00:15.145614 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-11 01:00:15.145618 | orchestrator |
2026-04-11 01:00:15.145622 | orchestrator | TASK [Create share directory] **************************************************
2026-04-11 01:00:15.145625 | orchestrator | Saturday 11 April 2026 00:58:50 +0000 (0:00:04.241) 0:00:09.125 ********
2026-04-11 01:00:15.145630 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-11 01:00:15.145634 | orchestrator |
2026-04-11 01:00:15.145638 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-04-11 01:00:15.145652 | orchestrator | Saturday 11 April 2026 00:58:51 +0000 (0:00:00.948) 0:00:10.074 ********
2026-04-11 01:00:15.145657 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-04-11 01:00:15.145661 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-11 01:00:15.145665 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-11 01:00:15.145669 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-04-11 01:00:15.145673 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-11 01:00:15.145677 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-04-11 01:00:15.145680 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-04-11 01:00:15.145684 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-04-11 01:00:15.145688 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-04-11 01:00:15.145692 | orchestrator |
2026-04-11 01:00:15.145695 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-04-11 01:00:15.145703 | orchestrator | Saturday 11 April 2026 00:59:04 +0000 (0:00:12.611) 0:00:22.685 ********
2026-04-11 01:00:15.145707 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-04-11 01:00:15.145711 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-04-11 01:00:15.145715 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-11 01:00:15.145719 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-11 01:00:15.145734 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-11 01:00:15.145738 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-11 01:00:15.145744 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-04-11 01:00:15.145750 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-04-11 01:00:15.145756 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-04-11 01:00:15.145776 | orchestrator |
2026-04-11 01:00:15.145791 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-04-11 01:00:15.145797 | orchestrator | Saturday 11 April 2026 00:59:07 +0000 (0:00:03.349) 0:00:26.034 ********
2026-04-11 01:00:15.145804 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-04-11 01:00:15.145810 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-11 01:00:15.145815 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-11 01:00:15.145821 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-04-11 01:00:15.145827 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-11 01:00:15.145832 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-04-11 01:00:15.145838 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-04-11 01:00:15.145843 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-04-11 01:00:15.145849 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-04-11 01:00:15.145855 | orchestrator |
2026-04-11 01:00:15.145861 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 01:00:15.145867 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 01:00:15.145874 | orchestrator |
2026-04-11 01:00:15.145880 | orchestrator |
2026-04-11 01:00:15.145886 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 01:00:15.145892 | orchestrator | Saturday 11 April 2026 00:59:14 +0000 (0:00:06.927) 0:00:32.962 ********
2026-04-11 01:00:15.145898 | orchestrator | ===============================================================================
2026-04-11 01:00:15.145905 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.61s
2026-04-11 01:00:15.145911 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.93s
2026-04-11 01:00:15.145918 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.65s
2026-04-11 01:00:15.145922 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.24s
2026-04-11 01:00:15.145944 | orchestrator | Check if target directories exist --------------------------------------- 3.35s
2026-04-11 01:00:15.145949 | orchestrator | Create share directory -------------------------------------------------- 0.95s
2026-04-11 01:00:15.145953 | orchestrator |
2026-04-11 01:00:15.145957 | orchestrator |
2026-04-11 01:00:15.145962 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-11 01:00:15.145972 | orchestrator |
2026-04-11 01:00:15.145976 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-11 01:00:15.145980 | orchestrator | Saturday 11 April 2026 00:59:18 +0000 (0:00:00.269) 0:00:00.269 ********
2026-04-11 01:00:15.145989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-11 01:00:15.145995 | orchestrator |
2026-04-11 01:00:15.145999 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-11 01:00:15.146003 | orchestrator | Saturday 11 April 2026 00:59:18 +0000 (0:00:00.220) 0:00:00.490 ********
2026-04-11 01:00:15.146008 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-11 01:00:15.146046 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-11 01:00:15.146053 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-11 01:00:15.146057 | orchestrator |
2026-04-11 01:00:15.146061 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-11 01:00:15.146066 | orchestrator | Saturday 11 April 2026 00:59:19 +0000 (0:00:01.417) 0:00:01.907 ********
2026-04-11 01:00:15.146071 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-11 01:00:15.146075 | orchestrator |
2026-04-11 01:00:15.146080 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-11 01:00:15.146085 | orchestrator | Saturday 11 April 2026 00:59:20 +0000 (0:00:01.023) 0:00:02.931 ********
2026-04-11 01:00:15.146089 | orchestrator | changed: [testbed-manager]
2026-04-11 01:00:15.146094 | orchestrator |
2026-04-11 01:00:15.146099 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-11 01:00:15.146102 | orchestrator | Saturday 11 April 2026 00:59:21 +0000 (0:00:00.802) 0:00:03.734 ********
2026-04-11 01:00:15.146106 | orchestrator | changed: [testbed-manager]
2026-04-11 01:00:15.146110 | orchestrator |
2026-04-11 01:00:15.146114 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-11 01:00:15.146117 | orchestrator | Saturday 11 April 2026 00:59:22 +0000 (0:00:00.805) 0:00:04.540 ********
2026-04-11 01:00:15.146121 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-04-11 01:00:15.146125 | orchestrator | ok: [testbed-manager]
2026-04-11 01:00:15.146131 | orchestrator |
2026-04-11 01:00:15.146137 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-11 01:00:15.146151 | orchestrator | Saturday 11 April 2026 01:00:03 +0000 (0:00:40.960) 0:00:45.500 ********
2026-04-11 01:00:15.146164 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-04-11 01:00:15.146171 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-04-11 01:00:15.146177 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-04-11 01:00:15.146184 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-04-11 01:00:15.146190 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-04-11 01:00:15.146196 | orchestrator |
2026-04-11 01:00:15.146202 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-11 01:00:15.146208 | orchestrator | Saturday 11 April 2026 01:00:07 +0000 (0:00:03.892) 0:00:49.393 ********
2026-04-11 01:00:15.146214 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-11 01:00:15.146220 | orchestrator |
2026-04-11 01:00:15.146226 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-11 01:00:15.146233 | orchestrator | Saturday 11 April 2026 01:00:07 +0000 (0:00:00.601) 0:00:49.994 ********
2026-04-11 01:00:15.146238 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:00:15.146243 | orchestrator |
2026-04-11 01:00:15.146249 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-11 01:00:15.146255 | orchestrator | Saturday 11 April 2026 01:00:07 +0000 (0:00:00.131) 0:00:50.126 ********
2026-04-11 01:00:15.146261 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:00:15.146272 | orchestrator |
2026-04-11 01:00:15.146278 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-04-11 01:00:15.146283 | orchestrator | Saturday 11 April 2026 01:00:08 +0000 (0:00:00.315) 0:00:50.441 ********
2026-04-11 01:00:15.146289 | orchestrator | changed: [testbed-manager]
2026-04-11 01:00:15.146294 | orchestrator |
2026-04-11 01:00:15.146300 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-04-11 01:00:15.146307 | orchestrator | Saturday 11 April 2026 01:00:09 +0000 (0:00:01.455) 0:00:51.896 ********
2026-04-11 01:00:15.146313 | orchestrator | changed: [testbed-manager]
2026-04-11 01:00:15.146319 | orchestrator |
2026-04-11 01:00:15.146325 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-04-11 01:00:15.146331 | orchestrator | Saturday 11 April 2026 01:00:10 +0000 (0:00:00.660) 0:00:52.557 ********
2026-04-11 01:00:15.146337 | orchestrator | changed: [testbed-manager]
2026-04-11 01:00:15.146349 | orchestrator |
2026-04-11 01:00:15.146359 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-04-11 01:00:15.146364 | orchestrator | Saturday 11 April 2026 01:00:10 +0000 (0:00:00.563) 0:00:53.120 ********
2026-04-11 01:00:15.146370 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-11 01:00:15.146375 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-11 01:00:15.146382 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-11 01:00:15.146387 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-11 01:00:15.146393 | orchestrator |
2026-04-11 01:00:15.146403 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 01:00:15.146410 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 01:00:15.146416 | orchestrator |
2026-04-11 01:00:15.146539 | orchestrator |
2026-04-11 01:00:15.146553 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 01:00:15.146558 | orchestrator | Saturday 11 April 2026 01:00:12 +0000 (0:00:01.495) 0:00:54.616 ********
2026-04-11 01:00:15.146564 | orchestrator | ===============================================================================
2026-04-11 01:00:15.146569 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.96s
2026-04-11 01:00:15.146575 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.89s
2026-04-11 01:00:15.146586 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.50s
2026-04-11 01:00:15.146591 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.46s
2026-04-11 01:00:15.146597 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.42s
2026-04-11 01:00:15.146602 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.02s
2026-04-11 01:00:15.146607 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.81s
2026-04-11 01:00:15.146613 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.80s
2026-04-11 01:00:15.146618 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.66s
2026-04-11 01:00:15.146624 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.60s
2026-04-11 01:00:15.146630 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.56s
2026-04-11 01:00:15.146636 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.32s
2026-04-11 01:00:15.146642 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2026-04-11 01:00:15.146649 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2026-04-11 01:00:15.146664 | orchestrator | 2026-04-11 01:00:15 | INFO  | Task 19d3e39b-6838-4ba3-a27a-9f8923d218ac is in state SUCCESS
2026-04-11 01:00:15.146679 | orchestrator | 2026-04-11 01:00:15 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED
2026-04-11 01:00:15.148606 | orchestrator | 2026-04-11 01:00:15 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED
2026-04-11 01:00:15.148675 | orchestrator | 2026-04-11 01:00:15 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:18.194355 | orchestrator | 2026-04-11 01:00:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:18.198087 | orchestrator | 2026-04-11 01:00:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:00:18.198395 | orchestrator | 2026-04-11 01:00:18 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED
2026-04-11 01:00:18.201015 | orchestrator | 2026-04-11 01:00:18 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED
2026-04-11 01:00:18.201073 | orchestrator | 2026-04-11 01:00:18 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:21.240721 | orchestrator | 2026-04-11 01:00:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:21.242641 | orchestrator | 2026-04-11 01:00:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:00:21.245374 | orchestrator | 2026-04-11 01:00:21 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED
2026-04-11 01:00:21.247437 | orchestrator | 2026-04-11 01:00:21 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED
2026-04-11 01:00:21.247637 | orchestrator | 2026-04-11 01:00:21 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:24.290632 | orchestrator | 2026-04-11 01:00:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:24.292410 | orchestrator | 2026-04-11 01:00:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:00:24.294165 | orchestrator | 2026-04-11 01:00:24 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED
2026-04-11 01:00:24.296210 | orchestrator | 2026-04-11 01:00:24 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED
2026-04-11 01:00:24.296262 | orchestrator | 2026-04-11 01:00:24 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:27.342588 | orchestrator | 2026-04-11 01:00:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:27.347900 | orchestrator | 2026-04-11 01:00:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:00:27.348836 | orchestrator | 2026-04-11 01:00:27 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED
2026-04-11 01:00:27.350960 | orchestrator | 2026-04-11 01:00:27 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED
2026-04-11 01:00:27.350988 | orchestrator | 2026-04-11 01:00:27 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:30.393830 | orchestrator | 2026-04-11 01:00:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:30.394525 | orchestrator | 2026-04-11 01:00:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:00:30.395742 | orchestrator | 2026-04-11 01:00:30 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED
2026-04-11 01:00:30.397699 | orchestrator | 2026-04-11 01:00:30 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED
2026-04-11 01:00:30.397755 | orchestrator | 2026-04-11 01:00:30 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:33.435398 | orchestrator | 2026-04-11 01:00:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:33.438261 | orchestrator | 2026-04-11 01:00:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:00:33.441557 | orchestrator | 2026-04-11 01:00:33 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED
2026-04-11 01:00:33.444571 | orchestrator | 2026-04-11 01:00:33 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED
2026-04-11 01:00:33.444645 | orchestrator | 2026-04-11 01:00:33 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:36.479219 | orchestrator | 2026-04-11 01:00:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:36.479300 | orchestrator | 2026-04-11 01:00:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:00:36.481173 | orchestrator | 2026-04-11 01:00:36 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED
2026-04-11 01:00:36.482173 | orchestrator | 2026-04-11 01:00:36 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED
2026-04-11 01:00:36.482227 | orchestrator | 2026-04-11 01:00:36 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:39.526172 | orchestrator | 2026-04-11 01:00:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:39.529060 | orchestrator | 2026-04-11 01:00:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:00:39.530423 | orchestrator | 2026-04-11 01:00:39 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED
2026-04-11 01:00:39.532098 | orchestrator | 2026-04-11 01:00:39 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED
2026-04-11 01:00:39.532252 | orchestrator | 2026-04-11 01:00:39 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:42.580115 | orchestrator | 2026-04-11 01:00:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:42.580174 | orchestrator | 2026-04-11 01:00:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:00:42.580818 | orchestrator | 2026-04-11 01:00:42 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED
2026-04-11 01:00:42.582540 | orchestrator | 2026-04-11 01:00:42 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED
2026-04-11 01:00:42.582565 | orchestrator | 2026-04-11 01:00:42 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:45.616034 | orchestrator | 2026-04-11 01:00:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:45.617107 | orchestrator | 2026-04-11 01:00:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:00:45.618329 | orchestrator | 2026-04-11 01:00:45 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED
2026-04-11 01:00:45.619495 | orchestrator | 2026-04-11 01:00:45 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED
2026-04-11 01:00:45.619533 | orchestrator | 2026-04-11 01:00:45 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:00:48.664472 | orchestrator | 2026-04-11 01:00:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:00:48.666163 | orchestrator | 2026-04-11 01:00:48 | INFO  | Task 
27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:00:48.668541 | orchestrator | 2026-04-11 01:00:48 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED 2026-04-11 01:00:48.672421 | orchestrator | 2026-04-11 01:00:48 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:00:48.672484 | orchestrator | 2026-04-11 01:00:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:00:51.728539 | orchestrator | 2026-04-11 01:00:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:00:51.731870 | orchestrator | 2026-04-11 01:00:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:00:51.734356 | orchestrator | 2026-04-11 01:00:51 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED 2026-04-11 01:00:51.736870 | orchestrator | 2026-04-11 01:00:51 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:00:51.737280 | orchestrator | 2026-04-11 01:00:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:00:54.786102 | orchestrator | 2026-04-11 01:00:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:00:54.789624 | orchestrator | 2026-04-11 01:00:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:00:54.793827 | orchestrator | 2026-04-11 01:00:54 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED 2026-04-11 01:00:54.796277 | orchestrator | 2026-04-11 01:00:54 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:00:54.796618 | orchestrator | 2026-04-11 01:00:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:00:57.838342 | orchestrator | 2026-04-11 01:00:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:00:57.839885 | orchestrator | 2026-04-11 01:00:57 | INFO  | Task 
27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:00:57.841847 | orchestrator | 2026-04-11 01:00:57 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED 2026-04-11 01:00:57.843758 | orchestrator | 2026-04-11 01:00:57 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:00:57.843792 | orchestrator | 2026-04-11 01:00:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:00.887048 | orchestrator | 2026-04-11 01:01:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:00.889361 | orchestrator | 2026-04-11 01:01:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:00.890166 | orchestrator | 2026-04-11 01:01:00 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED 2026-04-11 01:01:00.893320 | orchestrator | 2026-04-11 01:01:00 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:00.893361 | orchestrator | 2026-04-11 01:01:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:03.925871 | orchestrator | 2026-04-11 01:01:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:03.927278 | orchestrator | 2026-04-11 01:01:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:03.928620 | orchestrator | 2026-04-11 01:01:03 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED 2026-04-11 01:01:03.930153 | orchestrator | 2026-04-11 01:01:03 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:03.930190 | orchestrator | 2026-04-11 01:01:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:06.959566 | orchestrator | 2026-04-11 01:01:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:06.962305 | orchestrator | 2026-04-11 01:01:06 | INFO  | Task 
27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:06.965180 | orchestrator | 2026-04-11 01:01:06 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED 2026-04-11 01:01:06.967318 | orchestrator | 2026-04-11 01:01:06 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:06.967747 | orchestrator | 2026-04-11 01:01:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:10.014655 | orchestrator | 2026-04-11 01:01:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:10.016639 | orchestrator | 2026-04-11 01:01:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:10.017535 | orchestrator | 2026-04-11 01:01:10 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED 2026-04-11 01:01:10.018606 | orchestrator | 2026-04-11 01:01:10 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:10.018631 | orchestrator | 2026-04-11 01:01:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:13.074318 | orchestrator | 2026-04-11 01:01:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:13.076118 | orchestrator | 2026-04-11 01:01:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:13.077968 | orchestrator | 2026-04-11 01:01:13 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED 2026-04-11 01:01:13.079626 | orchestrator | 2026-04-11 01:01:13 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:13.079796 | orchestrator | 2026-04-11 01:01:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:16.116921 | orchestrator | 2026-04-11 01:01:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:16.118818 | orchestrator | 2026-04-11 01:01:16 | INFO  | Task 
27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:16.120125 | orchestrator | 2026-04-11 01:01:16 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED 2026-04-11 01:01:16.121853 | orchestrator | 2026-04-11 01:01:16 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:16.121892 | orchestrator | 2026-04-11 01:01:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:19.170770 | orchestrator | 2026-04-11 01:01:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:19.172171 | orchestrator | 2026-04-11 01:01:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:19.173737 | orchestrator | 2026-04-11 01:01:19 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED 2026-04-11 01:01:19.175498 | orchestrator | 2026-04-11 01:01:19 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:19.175537 | orchestrator | 2026-04-11 01:01:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:22.201159 | orchestrator | 2026-04-11 01:01:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:22.201464 | orchestrator | 2026-04-11 01:01:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:22.201878 | orchestrator | 2026-04-11 01:01:22 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state STARTED 2026-04-11 01:01:22.202519 | orchestrator | 2026-04-11 01:01:22 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:22.202540 | orchestrator | 2026-04-11 01:01:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:25.247050 | orchestrator | 2026-04-11 01:01:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:25.249880 | orchestrator | 2026-04-11 01:01:25 | INFO  | Task 
4bd5c3ec-0bef-4f8a-9b61-a04544602b06 is in state STARTED
2026-04-11 01:01:25.252363 | orchestrator | 2026-04-11 01:01:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:01:25.258905 | orchestrator | 2026-04-11 01:01:25 | INFO  | Task 14b1f560-5611-4d45-acd3-2c664c5c9546 is in state SUCCESS
2026-04-11 01:01:25.261361 | orchestrator |
2026-04-11 01:01:25.261405 | orchestrator |
2026-04-11 01:01:25.261411 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 01:01:25.261416 | orchestrator |
2026-04-11 01:01:25.261420 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 01:01:25.261425 | orchestrator | Saturday 11 April 2026 01:00:16 +0000 (0:00:00.341) 0:00:00.341 ********
2026-04-11 01:01:25.261429 | orchestrator | ok: [testbed-manager]
2026-04-11 01:01:25.261433 | orchestrator | ok: [testbed-node-0]
2026-04-11 01:01:25.261437 | orchestrator | ok: [testbed-node-1]
2026-04-11 01:01:25.261468 | orchestrator | ok: [testbed-node-2]
2026-04-11 01:01:25.261474 | orchestrator | ok: [testbed-node-3]
2026-04-11 01:01:25.261478 | orchestrator | ok: [testbed-node-4]
2026-04-11 01:01:25.261482 | orchestrator | ok: [testbed-node-5]
2026-04-11 01:01:25.261486 | orchestrator |
2026-04-11 01:01:25.261490 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 01:01:25.261494 | orchestrator | Saturday 11 April 2026 01:00:16 +0000 (0:00:00.891) 0:00:01.232 ********
2026-04-11 01:01:25.261499 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-04-11 01:01:25.261503 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-11 01:01:25.261508 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-11 01:01:25.261512 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-11 01:01:25.261516 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-11 01:01:25.261520 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-11 01:01:25.261524 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-11 01:01:25.261528 | orchestrator |
2026-04-11 01:01:25.261532 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-11 01:01:25.261570 | orchestrator |
2026-04-11 01:01:25.261577 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-11 01:01:25.261584 | orchestrator | Saturday 11 April 2026 01:00:17 +0000 (0:00:00.866) 0:00:02.099 ********
2026-04-11 01:01:25.261591 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 01:01:25.261598 | orchestrator |
2026-04-11 01:01:25.261605 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-11 01:01:25.261611 | orchestrator | Saturday 11 April 2026 01:00:19 +0000 (0:00:01.309) 0:00:03.409 ********
2026-04-11 01:01:25.261629 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-11 01:01:25.261649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.261654 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.261708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.261715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.261719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.261726 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.261731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.261735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.261743 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:25.261790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.261796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.261800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.261807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.261811 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.261819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.261823 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.261830 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.261834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.261838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.261845 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.261852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.261864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.261873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.261880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.261891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.261898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.261905 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.261915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.261926 | orchestrator | 2026-04-11 01:01:25.261932 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-11 01:01:25.261938 | orchestrator | Saturday 11 April 2026 01:00:23 +0000 (0:00:03.905) 0:00:07.314 ******** 2026-04-11 01:01:25.262160 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 01:01:25.262181 | orchestrator | 2026-04-11 01:01:25.262188 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-11 01:01:25.262195 | orchestrator | Saturday 11 April 2026 01:00:24 +0000 (0:00:01.316) 0:00:08.631 ******** 2026-04-11 01:01:25.262235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.262241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.262250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.262255 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-11 01:01:25.262260 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.262279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.262283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.262287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.262292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.262435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.262445 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.262449 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.262486 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.262500 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.262504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.262508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.262512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.262527 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.262531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.262535 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.262544 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.262549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.262556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.262565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.262578 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:25.262585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-11 01:01:25.262599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.262606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.262613 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.262646 | orchestrator | 2026-04-11 01:01:25.262651 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-11 01:01:25.262655 | orchestrator | Saturday 11 April 2026 01:00:29 +0000 
(0:00:05.001) 0:00:13.632 ******** 2026-04-11 01:01:25.262664 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-11 01:01:25.262786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.262791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.262799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.262806 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.262814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}})  2026-04-11 01:01:25.262825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.262831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.262854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.262862 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:25.262869 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.262942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.262957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.262964 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 01:01:25.262972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.263052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263063 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.263163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263199 | orchestrator | skipping: [testbed-node-3] 2026-04-11 01:01:25.263207 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263214 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:01:25.263221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263228 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:25.263291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263314 | orchestrator | skipping: [testbed-node-4] 2026-04-11 01:01:25.263320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263337 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:25.263344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.263351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263358 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263365 | orchestrator | skipping: [testbed-node-5] 2026-04-11 01:01:25.263371 | orchestrator | 2026-04-11 01:01:25.263382 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-11 01:01:25.263389 | orchestrator | Saturday 11 April 2026 01:00:31 +0000 (0:00:02.106) 0:00:15.739 ******** 2026-04-11 01:01:25.263413 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  
2026-04-11 01:01:25.263421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.263431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.263438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263449 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.263479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.263494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263516 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:25.263523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263537 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.263585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263592 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:25.263598 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263605 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:25.263612 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 01:01:25.263637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.263644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263656 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.263661 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:01:25.263665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263668 | orchestrator | skipping: [testbed-node-3] 2026-04-11 01:01:25.263672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263680 | orchestrator | skipping: [testbed-node-4] 2026-04-11 01:01:25.263685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.263708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.263723 | orchestrator | skipping: [testbed-node-5] 2026-04-11 01:01:25.263730 | orchestrator | 2026-04-11 01:01:25.263737 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-11 01:01:25.263741 | orchestrator | Saturday 11 April 2026 01:00:34 +0000 (0:00:02.556) 0:00:18.296 ******** 2026-04-11 01:01:25.263745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.263753 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-11 01:01:25.263758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.263765 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.263770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.263787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.263791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.263795 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.263801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.263805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.263812 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.263816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.263830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.263835 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.263839 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.263845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.263852 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.263863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.263873 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.263881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.263903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.263912 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:25.263924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.263930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.263938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.263943 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.263959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.263965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.263969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.263974 | orchestrator | 2026-04-11 01:01:25.263979 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-11 01:01:25.263983 | orchestrator | Saturday 11 April 2026 01:00:39 +0000 (0:00:05.883) 0:00:24.179 ******** 2026-04-11 01:01:25.263987 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 01:01:25.263992 | orchestrator | 2026-04-11 
01:01:25.264045 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-11 01:01:25.264051 | orchestrator | Saturday 11 April 2026 01:00:40 +0000 (0:00:00.912) 0:00:25.092 ******** 2026-04-11 01:01:25.264057 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:01:25.264071 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:25.264078 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:25.264085 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:25.264092 | orchestrator | skipping: [testbed-node-3] 2026-04-11 01:01:25.264099 | orchestrator | skipping: [testbed-node-4] 2026-04-11 01:01:25.264108 | orchestrator | skipping: [testbed-node-5] 2026-04-11 01:01:25.264114 | orchestrator | 2026-04-11 01:01:25.264119 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-11 01:01:25.264124 | orchestrator | Saturday 11 April 2026 01:00:41 +0000 (0:00:00.767) 0:00:25.860 ******** 2026-04-11 01:01:25.264128 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 01:01:25.264133 | orchestrator | 2026-04-11 01:01:25.264137 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-11 01:01:25.264142 | orchestrator | Saturday 11 April 2026 01:00:42 +0000 (0:00:00.745) 0:00:26.605 ******** 2026-04-11 01:01:25.264147 | orchestrator | [WARNING]: Skipped 2026-04-11 01:01:25.264152 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264157 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-11 01:01:25.264162 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264166 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-11 01:01:25.264171 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 01:01:25.264175 | orchestrator | [WARNING]: 
Skipped 2026-04-11 01:01:25.264180 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264184 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-11 01:01:25.264188 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264193 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-11 01:01:25.264198 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-11 01:01:25.264202 | orchestrator | [WARNING]: Skipped 2026-04-11 01:01:25.264206 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264210 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-11 01:01:25.264214 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264219 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-11 01:01:25.264223 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 01:01:25.264228 | orchestrator | [WARNING]: Skipped 2026-04-11 01:01:25.264232 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264237 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-11 01:01:25.264241 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264246 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-11 01:01:25.264250 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 01:01:25.264254 | orchestrator | [WARNING]: Skipped 2026-04-11 01:01:25.264259 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264263 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-11 01:01:25.264267 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264272 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-11 01:01:25.264276 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-11 01:01:25.264297 | orchestrator | [WARNING]: Skipped 2026-04-11 01:01:25.264302 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264306 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-11 01:01:25.264311 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264319 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-11 01:01:25.264323 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 01:01:25.264327 | orchestrator | [WARNING]: Skipped 2026-04-11 01:01:25.264377 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264381 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-11 01:01:25.264385 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 01:01:25.264389 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-11 01:01:25.264393 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 01:01:25.264397 | orchestrator | 2026-04-11 01:01:25.264400 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-11 01:01:25.264404 | orchestrator | Saturday 11 April 2026 01:00:43 +0000 (0:00:01.601) 0:00:28.207 ******** 2026-04-11 01:01:25.264408 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-11 01:01:25.264413 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:25.264417 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  
2026-04-11 01:01:25.264421 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-11 01:01:25.264424 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:25.264428 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:25.264432 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-11 01:01:25.264436 | orchestrator | skipping: [testbed-node-3] 2026-04-11 01:01:25.264440 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-11 01:01:25.264444 | orchestrator | skipping: [testbed-node-4] 2026-04-11 01:01:25.264448 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-11 01:01:25.264452 | orchestrator | skipping: [testbed-node-5] 2026-04-11 01:01:25.264455 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-11 01:01:25.264459 | orchestrator | 2026-04-11 01:01:25.264465 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-11 01:01:25.264469 | orchestrator | Saturday 11 April 2026 01:00:56 +0000 (0:00:12.741) 0:00:40.948 ******** 2026-04-11 01:01:25.264473 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-11 01:01:25.264477 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:25.264480 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-11 01:01:25.264484 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:25.264488 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-11 01:01:25.264492 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:25.264496 | orchestrator | 
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-11 01:01:25.264500 | orchestrator | skipping: [testbed-node-4] 2026-04-11 01:01:25.264504 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-11 01:01:25.264507 | orchestrator | skipping: [testbed-node-3] 2026-04-11 01:01:25.264511 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-11 01:01:25.264515 | orchestrator | skipping: [testbed-node-5] 2026-04-11 01:01:25.264519 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-11 01:01:25.264523 | orchestrator | 2026-04-11 01:01:25.264526 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-11 01:01:25.264530 | orchestrator | Saturday 11 April 2026 01:00:59 +0000 (0:00:02.966) 0:00:43.915 ******** 2026-04-11 01:01:25.264537 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-11 01:01:25.264542 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:25.264545 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-11 01:01:25.264549 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:25.264554 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-11 01:01:25.264560 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:25.264567 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-11 01:01:25.264573 | orchestrator | skipping: [testbed-node-3] 
2026-04-11 01:01:25.264583 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-11 01:01:25.264590 | orchestrator | skipping: [testbed-node-4] 2026-04-11 01:01:25.264616 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-11 01:01:25.264623 | orchestrator | skipping: [testbed-node-5] 2026-04-11 01:01:25.264629 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-11 01:01:25.264636 | orchestrator | 2026-04-11 01:01:25.264643 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-11 01:01:25.264647 | orchestrator | Saturday 11 April 2026 01:01:01 +0000 (0:00:01.492) 0:00:45.407 ******** 2026-04-11 01:01:25.264651 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 01:01:25.264655 | orchestrator | 2026-04-11 01:01:25.264658 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-11 01:01:25.264662 | orchestrator | Saturday 11 April 2026 01:01:01 +0000 (0:00:00.707) 0:00:46.115 ******** 2026-04-11 01:01:25.264666 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:01:25.264670 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:25.264673 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:25.264677 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:25.264683 | orchestrator | skipping: [testbed-node-3] 2026-04-11 01:01:25.264692 | orchestrator | skipping: [testbed-node-4] 2026-04-11 01:01:25.264700 | orchestrator | skipping: [testbed-node-5] 2026-04-11 01:01:25.264706 | orchestrator | 2026-04-11 01:01:25.264712 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-11 
01:01:25.264718 | orchestrator | Saturday 11 April 2026 01:01:02 +0000 (0:00:00.683) 0:00:46.798 ******** 2026-04-11 01:01:25.264725 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:01:25.264731 | orchestrator | skipping: [testbed-node-3] 2026-04-11 01:01:25.264737 | orchestrator | skipping: [testbed-node-4] 2026-04-11 01:01:25.264744 | orchestrator | skipping: [testbed-node-5] 2026-04-11 01:01:25.264747 | orchestrator | changed: [testbed-node-0] 2026-04-11 01:01:25.264751 | orchestrator | changed: [testbed-node-1] 2026-04-11 01:01:25.264755 | orchestrator | changed: [testbed-node-2] 2026-04-11 01:01:25.264759 | orchestrator | 2026-04-11 01:01:25.264764 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-11 01:01:25.264770 | orchestrator | Saturday 11 April 2026 01:01:04 +0000 (0:00:01.828) 0:00:48.627 ******** 2026-04-11 01:01:25.264776 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-11 01:01:25.264781 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:01:25.264844 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-11 01:01:25.264857 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:25.264870 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-11 01:01:25.264881 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:25.264888 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-11 01:01:25.264895 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:25.264901 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-11 01:01:25.264908 | orchestrator | skipping: [testbed-node-3] 2026-04-11 01:01:25.264915 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-11 01:01:25.264922 | orchestrator | skipping: [testbed-node-4] 2026-04-11 01:01:25.264927 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-11 01:01:25.264932 | orchestrator | skipping: [testbed-node-5] 2026-04-11 01:01:25.264938 | orchestrator | 2026-04-11 01:01:25.264943 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-11 01:01:25.264952 | orchestrator | Saturday 11 April 2026 01:01:05 +0000 (0:00:01.133) 0:00:49.760 ******** 2026-04-11 01:01:25.264960 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-11 01:01:25.264966 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:25.264972 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-11 01:01:25.264979 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:25.264985 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-11 01:01:25.264991 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:25.265025 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-11 01:01:25.265032 | orchestrator | skipping: [testbed-node-3] 2026-04-11 01:01:25.265039 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-11 01:01:25.265045 | orchestrator | skipping: [testbed-node-4] 2026-04-11 01:01:25.265051 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-11 01:01:25.265057 | orchestrator | skipping: [testbed-node-5] 2026-04-11 01:01:25.265063 | 
orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-11 01:01:25.265069 | orchestrator | 2026-04-11 01:01:25.265075 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-11 01:01:25.265082 | orchestrator | Saturday 11 April 2026 01:01:07 +0000 (0:00:01.561) 0:00:51.322 ******** 2026-04-11 01:01:25.265089 | orchestrator | [WARNING]: Skipped 2026-04-11 01:01:25.265095 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-11 01:01:25.265101 | orchestrator | due to this access issue: 2026-04-11 01:01:25.265107 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-11 01:01:25.265121 | orchestrator | not a directory 2026-04-11 01:01:25.265128 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 01:01:25.265134 | orchestrator | 2026-04-11 01:01:25.265141 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-11 01:01:25.265147 | orchestrator | Saturday 11 April 2026 01:01:08 +0000 (0:00:01.157) 0:00:52.479 ******** 2026-04-11 01:01:25.265154 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:01:25.265160 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:25.265174 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:25.265186 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:25.265193 | orchestrator | skipping: [testbed-node-3] 2026-04-11 01:01:25.265200 | orchestrator | skipping: [testbed-node-4] 2026-04-11 01:01:25.265206 | orchestrator | skipping: [testbed-node-5] 2026-04-11 01:01:25.265212 | orchestrator | 2026-04-11 01:01:25.265225 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-11 01:01:25.265232 | orchestrator | Saturday 11 April 2026 01:01:08 +0000 (0:00:00.678) 0:00:53.158 ******** 
2026-04-11 01:01:25.265238 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:01:25.265245 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:25.265251 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:25.265258 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:25.265265 | orchestrator | skipping: [testbed-node-3] 2026-04-11 01:01:25.265271 | orchestrator | skipping: [testbed-node-4] 2026-04-11 01:01:25.265277 | orchestrator | skipping: [testbed-node-5] 2026-04-11 01:01:25.265284 | orchestrator | 2026-04-11 01:01:25.265290 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-04-11 01:01:25.265297 | orchestrator | Saturday 11 April 2026 01:01:09 +0000 (0:00:00.806) 0:00:53.965 ******** 2026-04-11 01:01:25.265305 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-11 01:01:25.265335 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.265345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.265352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.265365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.265377 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.265384 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.265390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 01:01:25.265399 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.265406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.265412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.265419 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.265432 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.265439 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.265445 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-04-11 01:01:25.265453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.265460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.265466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.265472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.265482 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:25.265493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.265500 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.265509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.265517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.265524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 01:01:25.265531 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.265549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 01:01:25.265556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})
2026-04-11 01:01:25.265563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 01:01:25.265570 | orchestrator |
2026-04-11 01:01:25.265576 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] ***
2026-04-11 01:01:25.265584 | orchestrator | Saturday 11 April 2026 01:01:13 +0000 (0:00:03.769) 0:00:57.735 ********
2026-04-11 01:01:25.265591 | orchestrator | changed: [testbed-manager] => {
2026-04-11 01:01:25.265598 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 01:01:25.265605 | orchestrator | }
2026-04-11 01:01:25.265612 | orchestrator | changed: [testbed-node-0] => {
2026-04-11 01:01:25.265619 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 01:01:25.265630 | orchestrator | }
2026-04-11 01:01:25.265638 | orchestrator | changed: [testbed-node-1] => {
2026-04-11 01:01:25.265644 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 01:01:25.265651 | orchestrator | }
2026-04-11 01:01:25.265658 | orchestrator | changed: [testbed-node-2] => {
2026-04-11 01:01:25.265664 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 01:01:25.265671 | orchestrator | }
2026-04-11 01:01:25.265678 | orchestrator | changed: [testbed-node-3] => {
2026-04-11 01:01:25.265684 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 01:01:25.265690 | orchestrator | }
2026-04-11 01:01:25.265696 | orchestrator | changed: [testbed-node-4] => {
2026-04-11 01:01:25.265703 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 01:01:25.265710 | orchestrator | }
2026-04-11 01:01:25.265717 | orchestrator | changed: [testbed-node-5] => {
2026-04-11 01:01:25.265723 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 01:01:25.265730 | orchestrator | }
2026-04-11 01:01:25.265737 | orchestrator |
2026-04-11 01:01:25.265743 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-11 01:01:25.265750 | orchestrator | Saturday 11 April 2026 01:01:14 +0000 (0:00:00.801) 0:00:58.536 ********
2026-04-11 01:01:25.265758 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-11 01:01:25.265774 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.265781 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.265788 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 01:01:25.265798 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.265805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.265815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.265822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.265834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.265841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.265848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.265857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.265864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.265875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.265882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.265889 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:01:25.265900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.265907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.265914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.265920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.265930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 01:01:25.265941 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:25.265947 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:25.265953 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:25.265960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.265967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.265973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.265980 | orchestrator | skipping: [testbed-node-3] 2026-04-11 01:01:25.265990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2026-04-11 01:01:25.266067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.266075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 01:01:25.266081 | orchestrator | skipping: [testbed-node-4] 2026-04-11 01:01:25.266092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 01:01:25.266105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 01:01:25.266112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 01:01:25.266118 | orchestrator | skipping: [testbed-node-5]
2026-04-11 01:01:25.266125 | orchestrator |
2026-04-11 01:01:25.266131 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-04-11 01:01:25.266138 | orchestrator | Saturday 11 April 2026 01:01:15 +0000 (0:00:01.687) 0:01:00.224 ********
2026-04-11 01:01:25.266145 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-11 01:01:25.266152 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:01:25.266158 | orchestrator |
2026-04-11 01:01:25.266165 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 01:01:25.266171 | orchestrator | Saturday 11 April 2026 01:01:16 +0000 (0:00:01.012) 0:01:01.237 ********
2026-04-11 01:01:25.266178 | orchestrator |
2026-04-11 01:01:25.266185 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 01:01:25.266192 | orchestrator | Saturday 11 April 2026 01:01:17 +0000 (0:00:00.063) 0:01:01.300 ********
2026-04-11 01:01:25.266198 | orchestrator |
2026-04-11 01:01:25.266204 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 01:01:25.266210 | orchestrator | Saturday 11 April 2026 01:01:17 +0000 (0:00:00.181) 0:01:01.482 ********
2026-04-11 01:01:25.266216 | orchestrator |
2026-04-11 01:01:25.266223 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 01:01:25.266234 | orchestrator | Saturday 11 April 2026 01:01:17 +0000 (0:00:00.057) 0:01:01.540 ********
2026-04-11 01:01:25.266241 | orchestrator |
2026-04-11 01:01:25.266247 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 01:01:25.266254 | orchestrator | Saturday 11 April 2026 01:01:17 +0000 (0:00:00.058) 0:01:01.598 ********
2026-04-11 01:01:25.266260 | orchestrator |
2026-04-11 01:01:25.266266 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 01:01:25.266271 | orchestrator | Saturday 11 April 2026 01:01:17 +0000 (0:00:00.055) 0:01:01.654 ********
2026-04-11 01:01:25.266278 | orchestrator |
2026-04-11 01:01:25.266284 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 01:01:25.266290 | orchestrator | Saturday 11 April 2026 01:01:17 +0000 (0:00:00.057) 0:01:01.711 ********
2026-04-11 01:01:25.266297 | orchestrator |
2026-04-11 01:01:25.266304 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-11 01:01:25.266307 | orchestrator | Saturday 11 April 2026 01:01:17 +0000 (0:00:00.080) 0:01:01.792 ********
2026-04-11 01:01:25.266316 | orchestrator | fatal: [testbed-manager]: FAILED!
=> {"changed": true, "msg":
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 275, in _raise_for_status
    response.raise_for_status()
  File "/usr/lib/python3/dist-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-server

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/tmp/ansible_kolla_container_payload_mvwr_b0d/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py", line 421, in main
    result = bool(getattr(cw, module.params.get('action'))())
  File "/tmp/ansible_kolla_container_payload_mvwr_b0d/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py", line 352, in recreate_or_restart_container
    self.start_container()
  File "/tmp/ansible_kolla_container_payload_mvwr_b0d/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py", line 370, in start_container
    self.pull_image()
  File "/tmp/ansible_kolla_container_payload_mvwr_b0d/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py", line 202, in pull_image
    json.loads(line.strip().decode('utf-8')) for line in self.dc.pull(
  File "/usr/lib/python3/dist-packages/docker/api/image.py", line 429, in pull
    self._raise_for_status(response)
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 277, in _raise_for_status
    raise create_api_error_from_http_exception(e) from e
  File "/usr/lib/python3/dist-packages/docker/errors.py", line 39, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation) from e
docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-server: Internal Server Error ("unknown: repository kolla/release/2024.2/prometheus-server not found")
}
2026-04-11 01:01:25.266325 | orchestrator |
2026-04-11 01:01:25.266329 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-04-11 01:01:25.266333 | orchestrator | Saturday 11 April 2026 01:01:19 +0000 (0:00:02.181) 0:01:03.974 ********
2026-04-11 01:01:25.266341 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg":
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 275, in _raise_for_status
    response.raise_for_status()
  File "/usr/lib/python3/dist-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/tmp/ansible_kolla_container_payload_ivxgl3vv/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py", line 421, in main
    result = bool(getattr(cw, module.params.get('action'))())
  File "/tmp/ansible_kolla_container_payload_ivxgl3vv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py", line 352, in recreate_or_restart_container
    self.start_container()
  File "/tmp/ansible_kolla_container_payload_ivxgl3vv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py", line 370, in start_container
    self.pull_image()
  File "/tmp/ansible_kolla_container_payload_ivxgl3vv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py", line 202, in pull_image
    json.loads(line.strip().decode('utf-8')) for line in self.dc.pull(
  File "/usr/lib/python3/dist-packages/docker/api/image.py", line 429, in pull
    self._raise_for_status(response)
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 277, in _raise_for_status
    raise create_api_error_from_http_exception(e) from e
  File "/usr/lib/python3/dist-packages/docker/errors.py", line 39, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation) from e
docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error ("unknown: repository kolla/release/2024.2/prometheus-node-exporter not found")
}
2026-04-11 01:01:25.266354 | orchestrator | fatal: [testbed-node-2]: FAILED!
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_e8ouew68/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_e8ouew68/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_e8ouew68/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_e8ouew68/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-11 01:01:25.266363 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_a66q5mgy/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_a66q5mgy/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_a66q5mgy/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_a66q5mgy/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n 
json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-11 01:01:25.266381 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_6b7inid6/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/tmp/ansible_kolla_container_payload_6b7inid6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_6b7inid6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_6b7inid6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-11 01:01:25.266390 | orchestrator | fatal: [testbed-node-3]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ywinp6lh/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ywinp6lh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_ywinp6lh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ywinp6lh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-11 01:01:25.266401 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_bs7q2403/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_bs7q2403/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_bs7q2403/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_bs7q2403/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n 
json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-11 01:01:25.266408 | orchestrator | 2026-04-11 01:01:25.266412 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 01:01:25.266416 | orchestrator | testbed-manager : ok=18  changed=9  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0 2026-04-11 01:01:25.266422 | orchestrator | testbed-node-0 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-11 01:01:25.266426 | orchestrator | testbed-node-1 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-11 01:01:25.266429 | orchestrator | testbed-node-2 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-11 01:01:25.266433 | orchestrator | testbed-node-3 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-04-11 01:01:25.266437 | orchestrator | testbed-node-4 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-04-11 01:01:25.266441 | orchestrator | testbed-node-5 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 
2026-04-11 01:01:25.266444 | orchestrator | 2026-04-11 01:01:25.266448 | orchestrator | 2026-04-11 01:01:25.266452 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 01:01:25.266456 | orchestrator | Saturday 11 April 2026 01:01:23 +0000 (0:00:03.871) 0:01:07.845 ******** 2026-04-11 01:01:25.266460 | orchestrator | =============================================================================== 2026-04-11 01:01:25.266463 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 12.74s 2026-04-11 01:01:25.266467 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.88s 2026-04-11 01:01:25.266471 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.00s 2026-04-11 01:01:25.266480 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.91s 2026-04-11 01:01:25.266484 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 3.87s 2026-04-11 01:01:25.266487 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 3.77s 2026-04-11 01:01:25.266491 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.97s 2026-04-11 01:01:25.266495 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.56s 2026-04-11 01:01:25.266499 | orchestrator | prometheus : Restart prometheus-server container ------------------------ 2.18s 2026-04-11 01:01:25.266502 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.11s 2026-04-11 01:01:25.266506 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.83s 2026-04-11 01:01:25.266510 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.69s 2026-04-11 01:01:25.266514 | orchestrator | 
prometheus : Find prometheus host config overrides ---------------------- 1.60s 2026-04-11 01:01:25.266517 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.56s 2026-04-11 01:01:25.266521 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.49s 2026-04-11 01:01:25.266525 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.32s 2026-04-11 01:01:25.266529 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.31s 2026-04-11 01:01:25.266532 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 1.16s 2026-04-11 01:01:25.266536 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.13s 2026-04-11 01:01:25.266540 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 1.01s 2026-04-11 01:01:25.266544 | orchestrator | 2026-04-11 01:01:25 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:25.266548 | orchestrator | 2026-04-11 01:01:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:28.315345 | orchestrator | 2026-04-11 01:01:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:28.317679 | orchestrator | 2026-04-11 01:01:28 | INFO  | Task 4bd5c3ec-0bef-4f8a-9b61-a04544602b06 is in state STARTED 2026-04-11 01:01:28.319817 | orchestrator | 2026-04-11 01:01:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:28.321923 | orchestrator | 2026-04-11 01:01:28 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:28.321956 | orchestrator | 2026-04-11 01:01:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:31.355952 | orchestrator | 2026-04-11 01:01:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 
01:01:31.357049 | orchestrator | 2026-04-11 01:01:31 | INFO  | Task 4bd5c3ec-0bef-4f8a-9b61-a04544602b06 is in state STARTED 2026-04-11 01:01:31.358764 | orchestrator | 2026-04-11 01:01:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:31.359488 | orchestrator | 2026-04-11 01:01:31 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:31.359518 | orchestrator | 2026-04-11 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:34.405552 | orchestrator | 2026-04-11 01:01:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:34.409082 | orchestrator | 2026-04-11 01:01:34 | INFO  | Task 4bd5c3ec-0bef-4f8a-9b61-a04544602b06 is in state STARTED 2026-04-11 01:01:34.410449 | orchestrator | 2026-04-11 01:01:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:34.413202 | orchestrator | 2026-04-11 01:01:34 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:34.413283 | orchestrator | 2026-04-11 01:01:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:37.456305 | orchestrator | 2026-04-11 01:01:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:37.457889 | orchestrator | 2026-04-11 01:01:37 | INFO  | Task 4bd5c3ec-0bef-4f8a-9b61-a04544602b06 is in state STARTED 2026-04-11 01:01:37.461120 | orchestrator | 2026-04-11 01:01:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:37.463482 | orchestrator | 2026-04-11 01:01:37 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:37.463548 | orchestrator | 2026-04-11 01:01:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:40.499194 | orchestrator | 2026-04-11 01:01:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:40.500127 | orchestrator 
| 2026-04-11 01:01:40 | INFO  | Task 4bd5c3ec-0bef-4f8a-9b61-a04544602b06 is in state STARTED 2026-04-11 01:01:40.502167 | orchestrator | 2026-04-11 01:01:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:40.503436 | orchestrator | 2026-04-11 01:01:40 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:40.503499 | orchestrator | 2026-04-11 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:43.540969 | orchestrator | 2026-04-11 01:01:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:43.542745 | orchestrator | 2026-04-11 01:01:43 | INFO  | Task 4bd5c3ec-0bef-4f8a-9b61-a04544602b06 is in state STARTED 2026-04-11 01:01:43.544658 | orchestrator | 2026-04-11 01:01:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:01:43.546669 | orchestrator | 2026-04-11 01:01:43 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED 2026-04-11 01:01:43.546704 | orchestrator | 2026-04-11 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:01:46.592748 | orchestrator | 2026-04-11 01:01:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:01:46.594745 | orchestrator | 2026-04-11 01:01:46 | INFO  | Task 4bd5c3ec-0bef-4f8a-9b61-a04544602b06 is in state SUCCESS 2026-04-11 01:01:46.597450 | orchestrator | 2026-04-11 01:01:46.597529 | orchestrator | 2026-04-11 01:01:46.597536 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 01:01:46.597541 | orchestrator | 2026-04-11 01:01:46.597546 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 01:01:46.597550 | orchestrator | Saturday 11 April 2026 01:01:26 +0000 (0:00:00.264) 0:00:00.264 ******** 2026-04-11 01:01:46.597554 | orchestrator | ok: [testbed-node-0] 2026-04-11 01:01:46.597560 | 
orchestrator | ok: [testbed-node-1] 2026-04-11 01:01:46.597564 | orchestrator | ok: [testbed-node-2] 2026-04-11 01:01:46.597568 | orchestrator | 2026-04-11 01:01:46.597573 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 01:01:46.597577 | orchestrator | Saturday 11 April 2026 01:01:26 +0000 (0:00:00.238) 0:00:00.503 ******** 2026-04-11 01:01:46.597581 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-11 01:01:46.597585 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-11 01:01:46.597589 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-11 01:01:46.597593 | orchestrator | 2026-04-11 01:01:46.597609 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-11 01:01:46.597613 | orchestrator | 2026-04-11 01:01:46.597617 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-11 01:01:46.597639 | orchestrator | Saturday 11 April 2026 01:01:27 +0000 (0:00:00.254) 0:00:00.757 ******** 2026-04-11 01:01:46.597644 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 01:01:46.597649 | orchestrator | 2026-04-11 01:01:46.597653 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-11 01:01:46.597656 | orchestrator | Saturday 11 April 2026 01:01:27 +0000 (0:00:00.534) 0:00:01.292 ******** 2026-04-11 01:01:46.597663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:46.597670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:46.597674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 
2026-04-11 01:01:46.597678 | orchestrator | 2026-04-11 01:01:46.597682 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-11 01:01:46.597686 | orchestrator | Saturday 11 April 2026 01:01:28 +0000 (0:00:00.895) 0:00:02.187 ******** 2026-04-11 01:01:46.597690 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 01:01:46.597694 | orchestrator | 2026-04-11 01:01:46.597698 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-11 01:01:46.597701 | orchestrator | Saturday 11 April 2026 01:01:29 +0000 (0:00:00.787) 0:00:02.975 ******** 2026-04-11 01:01:46.597705 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 01:01:46.597710 | orchestrator | 2026-04-11 01:01:46.597724 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-11 01:01:46.597728 | orchestrator | Saturday 11 April 2026 01:01:29 +0000 (0:00:00.449) 0:00:03.424 ******** 2026-04-11 01:01:46.597735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:46.597743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:46.597747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:46.597751 | orchestrator | 2026-04-11 01:01:46.597755 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-11 01:01:46.597759 | orchestrator | Saturday 11 April 2026 01:01:31 +0000 (0:00:01.221) 0:00:04.645 ******** 2026-04-11 01:01:46.597763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 01:01:46.597767 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:46.597775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 01:01:46.597785 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:46.597795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 01:01:46.597805 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:46.597813 | orchestrator | 2026-04-11 01:01:46.597819 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-11 01:01:46.597825 | orchestrator | Saturday 11 April 2026 01:01:31 +0000 (0:00:00.408) 0:00:05.054 ******** 2026-04-11 01:01:46.597831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 01:01:46.597837 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:46.597843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 01:01:46.597849 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:46.597855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 01:01:46.597861 | orchestrator | skipping: [testbed-node-2] 2026-04-11 01:01:46.597872 | orchestrator | 2026-04-11 01:01:46.597879 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-11 01:01:46.597884 | orchestrator | Saturday 11 April 2026 01:01:32 +0000 (0:00:00.551) 0:00:05.605 ******** 2026-04-11 01:01:46.597904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:46.597913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:46.597920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:46.597926 | orchestrator | 2026-04-11 01:01:46.597931 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-11 01:01:46.597954 | orchestrator | Saturday 11 April 2026 01:01:33 +0000 (0:00:01.210) 0:00:06.816 ******** 2026-04-11 01:01:46.597960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:46.597968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 01:01:46.597984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 01:01:46.597991 | orchestrator |
2026-04-11 01:01:46.597997 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-11 01:01:46.598003 | orchestrator | Saturday 11 April 2026 01:01:34 +0000 (0:00:01.622) 0:00:08.438 ********
2026-04-11 01:01:46.598009 | orchestrator | skipping: [testbed-node-0]
2026-04-11 01:01:46.598076 | orchestrator | skipping: [testbed-node-1]
2026-04-11 01:01:46.598083 | orchestrator | skipping: [testbed-node-2]
2026-04-11 01:01:46.598089 | orchestrator |
2026-04-11 01:01:46.598100 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-11 01:01:46.598104 | orchestrator | Saturday 11 April 2026 01:01:35 +0000 (0:00:00.262) 0:00:08.700 ********
2026-04-11 01:01:46.598108 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-11 01:01:46.598112 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-11 01:01:46.598116 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-11 01:01:46.598119 | orchestrator |
2026-04-11 01:01:46.598123 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-11 01:01:46.598127 | orchestrator | Saturday 11 April 2026 01:01:36 +0000 (0:00:01.271) 0:00:09.973 ********
2026-04-11 01:01:46.598131 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-11 01:01:46.598135 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-11 01:01:46.598139 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-11 01:01:46.598143 | orchestrator |
2026-04-11 01:01:46.598149 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ******
2026-04-11 01:01:46.598154 | orchestrator | Saturday 11 April 2026 01:01:37 +0000 (0:00:01.244) 0:00:11.217 ********
2026-04-11 01:01:46.598160 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 01:01:46.598165 | orchestrator |
2026-04-11 01:01:46.598279 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] ***************************
2026-04-11 01:01:46.598285 | orchestrator | Saturday 11 April 2026 01:01:38 +0000 (0:00:00.759) 0:00:11.976 ********
2026-04-11 01:01:46.598289 | orchestrator | ok: [testbed-node-0]
2026-04-11 01:01:46.598294 | orchestrator | ok: [testbed-node-1]
2026-04-11 01:01:46.598297 | orchestrator | ok: [testbed-node-2]
2026-04-11 01:01:46.598301 | orchestrator |
2026-04-11 01:01:46.598305 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-11 01:01:46.598309 | orchestrator | Saturday 11 April 2026 01:01:39 +0000 (0:00:00.888) 0:00:12.865 ********
2026-04-11 01:01:46.598312
| orchestrator | changed: [testbed-node-0] 2026-04-11 01:01:46.598316 | orchestrator | changed: [testbed-node-1] 2026-04-11 01:01:46.598320 | orchestrator | changed: [testbed-node-2] 2026-04-11 01:01:46.598324 | orchestrator | 2026-04-11 01:01:46.598327 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-04-11 01:01:46.598338 | orchestrator | Saturday 11 April 2026 01:01:40 +0000 (0:00:01.298) 0:00:14.163 ******** 2026-04-11 01:01:46.598343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:46.598347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:46.598369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 01:01:46.598374 | orchestrator | 2026-04-11 01:01:46.598378 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-04-11 01:01:46.598382 | orchestrator | Saturday 11 April 2026 01:01:41 +0000 (0:00:01.150) 0:00:15.314 ******** 2026-04-11 01:01:46.598386 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 01:01:46.598390 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 01:01:46.598394 | orchestrator | } 2026-04-11 01:01:46.598398 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 01:01:46.598401 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 01:01:46.598405 | orchestrator | } 2026-04-11 01:01:46.598409 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 01:01:46.598413 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 01:01:46.598417 | orchestrator | } 2026-04-11 01:01:46.598420 | orchestrator | 2026-04-11 01:01:46.598424 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 01:01:46.598428 | 
orchestrator | Saturday 11 April 2026 01:01:42 +0000 (0:00:00.325) 0:00:15.639 ******** 2026-04-11 01:01:46.598432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 01:01:46.598443 | orchestrator | skipping: [testbed-node-0] 2026-04-11 01:01:46.598451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 01:01:46.598459 | orchestrator | skipping: [testbed-node-1] 2026-04-11 01:01:46.598466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 01:01:46.598472 | orchestrator | skipping: [testbed-node-2]
2026-04-11 01:01:46.598477 | orchestrator |
2026-04-11 01:01:46.598483 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-04-11 01:01:46.598489 | orchestrator | Saturday 11 April 2026 01:01:42 +0000 (0:00:00.731) 0:00:16.370 ********
2026-04-11 01:01:46.598498 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-11 01:01:46.598505 | orchestrator |
2026-04-11 01:01:46.598511 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 01:01:46.598518 | orchestrator | testbed-node-0 : ok=16  changed=9  unreachable=0  failed=1  skipped=4  rescued=0  ignored=0
2026-04-11 01:01:46.598525 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0  failed=0  skipped=4  rescued=0  ignored=0
2026-04-11 01:01:46.598532 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0  failed=0  skipped=4  rescued=0  ignored=0
2026-04-11 01:01:46.598537 | orchestrator |
2026-04-11 01:01:46.598546 | orchestrator |
2026-04-11 01:01:46.598554 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 01:01:46.598566 | orchestrator | Saturday 11 April 2026 01:01:43 +0000 (0:00:00.739) 0:00:17.110 ********
2026-04-11 01:01:46.598571 | orchestrator | ===============================================================================
2026-04-11 01:01:46.598577 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.62s
2026-04-11 01:01:46.598582 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.30s
2026-04-11 01:01:46.598588 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.27s
2026-04-11 01:01:46.598594 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.24s
2026-04-11 01:01:46.598605 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.22s
2026-04-11 01:01:46.598610 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.21s
2026-04-11 01:01:46.598615 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.15s
2026-04-11 01:01:46.598621 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.90s
2026-04-11 01:01:46.598626 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.89s
2026-04-11 01:01:46.598633 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.79s
2026-04-11 01:01:46.598639 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.76s
2026-04-11 01:01:46.598644 | orchestrator | grafana : Creating grafana database ------------------------------------- 0.74s
2026-04-11 01:01:46.598649 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.73s
2026-04-11 01:01:46.598654 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.55s
2026-04-11 01:01:46.598660 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.53s
2026-04-11 01:01:46.598666 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.45s
2026-04-11 01:01:46.598675 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.41s
2026-04-11 01:01:46.598684 | orchestrator | service-check-containers : grafana | Notify handlers to restart containers --- 0.33s
2026-04-11 01:01:46.598689 | orchestrator | grafana : Copying over extra configuration file ------------------------- 0.26s
2026-04-11 01:01:46.598695 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.25s
2026-04-11 01:01:46.598701 | orchestrator | 2026-04-11 01:01:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:01:46.603716 | orchestrator | 2026-04-11 01:01:46 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED
2026-04-11 01:01:46.603805 | orchestrator | 2026-04-11 01:01:46 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:01:49.644636 | orchestrator | 2026-04-11 01:01:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:01:49.647307 | orchestrator | 2026-04-11 01:01:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:01:49.649362 | orchestrator | 2026-04-11 01:01:49 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state STARTED
2026-04-11 01:01:49.649425 | orchestrator | 2026-04-11 01:01:49 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:01:52.699235 | orchestrator | 2026-04-11 01:01:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:01:52.700970 | orchestrator | 2026-04-11 01:01:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:01:52.702507 | orchestrator | 2026-04-11 01:01:52 | INFO  | Task 0fb1dd1e-fcf7-4b40-be23-684774e48b4b is in state SUCCESS
2026-04-11 01:01:52.702543 | orchestrator | 2026-04-11 01:01:52 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:03:05.746802 | orchestrator | 2026-04-11 01:03:05 | INFO  | Task
e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:05.747472 | orchestrator | 2026-04-11 01:03:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:05.747502 | orchestrator | 2026-04-11 01:03:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:08.792097 | orchestrator | 2026-04-11 01:03:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:08.794079 | orchestrator | 2026-04-11 01:03:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:08.794208 | orchestrator | 2026-04-11 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:11.836365 | orchestrator | 2026-04-11 01:03:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:11.837776 | orchestrator | 2026-04-11 01:03:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:11.837820 | orchestrator | 2026-04-11 01:03:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:14.880764 | orchestrator | 2026-04-11 01:03:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:14.882575 | orchestrator | 2026-04-11 01:03:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:14.882655 | orchestrator | 2026-04-11 01:03:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:17.931348 | orchestrator | 2026-04-11 01:03:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:17.933302 | orchestrator | 2026-04-11 01:03:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:17.933381 | orchestrator | 2026-04-11 01:03:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:20.974094 | orchestrator | 2026-04-11 01:03:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 
01:03:20.976379 | orchestrator | 2026-04-11 01:03:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:20.976436 | orchestrator | 2026-04-11 01:03:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:24.022230 | orchestrator | 2026-04-11 01:03:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:24.023703 | orchestrator | 2026-04-11 01:03:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:24.023743 | orchestrator | 2026-04-11 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:27.065787 | orchestrator | 2026-04-11 01:03:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:27.067347 | orchestrator | 2026-04-11 01:03:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:27.067565 | orchestrator | 2026-04-11 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:30.108498 | orchestrator | 2026-04-11 01:03:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:30.109765 | orchestrator | 2026-04-11 01:03:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:30.109828 | orchestrator | 2026-04-11 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:33.151505 | orchestrator | 2026-04-11 01:03:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:33.153711 | orchestrator | 2026-04-11 01:03:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:33.153750 | orchestrator | 2026-04-11 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:36.199282 | orchestrator | 2026-04-11 01:03:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:36.201679 | orchestrator | 2026-04-11 01:03:36 | INFO  | Task 
27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:36.201714 | orchestrator | 2026-04-11 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:39.239285 | orchestrator | 2026-04-11 01:03:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:39.241948 | orchestrator | 2026-04-11 01:03:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:39.242243 | orchestrator | 2026-04-11 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:42.288265 | orchestrator | 2026-04-11 01:03:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:42.290087 | orchestrator | 2026-04-11 01:03:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:42.290217 | orchestrator | 2026-04-11 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:45.333380 | orchestrator | 2026-04-11 01:03:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:45.335377 | orchestrator | 2026-04-11 01:03:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:45.335439 | orchestrator | 2026-04-11 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:48.375747 | orchestrator | 2026-04-11 01:03:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:48.378448 | orchestrator | 2026-04-11 01:03:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:48.378736 | orchestrator | 2026-04-11 01:03:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:51.415718 | orchestrator | 2026-04-11 01:03:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:51.417914 | orchestrator | 2026-04-11 01:03:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:03:51.417962 | orchestrator | 2026-04-11 01:03:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:54.458811 | orchestrator | 2026-04-11 01:03:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:54.462628 | orchestrator | 2026-04-11 01:03:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:54.462720 | orchestrator | 2026-04-11 01:03:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:03:57.506300 | orchestrator | 2026-04-11 01:03:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:03:57.508065 | orchestrator | 2026-04-11 01:03:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:03:57.508153 | orchestrator | 2026-04-11 01:03:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:00.553539 | orchestrator | 2026-04-11 01:04:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:00.555393 | orchestrator | 2026-04-11 01:04:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:00.555449 | orchestrator | 2026-04-11 01:04:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:03.595581 | orchestrator | 2026-04-11 01:04:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:03.596961 | orchestrator | 2026-04-11 01:04:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:03.597005 | orchestrator | 2026-04-11 01:04:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:06.642011 | orchestrator | 2026-04-11 01:04:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:06.644495 | orchestrator | 2026-04-11 01:04:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:06.644632 | orchestrator | 2026-04-11 01:04:06 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:04:09.687235 | orchestrator | 2026-04-11 01:04:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:09.688948 | orchestrator | 2026-04-11 01:04:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:09.688991 | orchestrator | 2026-04-11 01:04:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:12.730439 | orchestrator | 2026-04-11 01:04:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:12.732363 | orchestrator | 2026-04-11 01:04:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:12.732978 | orchestrator | 2026-04-11 01:04:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:15.773324 | orchestrator | 2026-04-11 01:04:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:15.774350 | orchestrator | 2026-04-11 01:04:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:15.774383 | orchestrator | 2026-04-11 01:04:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:18.818451 | orchestrator | 2026-04-11 01:04:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:18.820438 | orchestrator | 2026-04-11 01:04:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:18.820481 | orchestrator | 2026-04-11 01:04:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:21.865560 | orchestrator | 2026-04-11 01:04:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:21.867389 | orchestrator | 2026-04-11 01:04:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:21.867440 | orchestrator | 2026-04-11 01:04:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:24.908754 | orchestrator | 2026-04-11 
01:04:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:24.911464 | orchestrator | 2026-04-11 01:04:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:24.911512 | orchestrator | 2026-04-11 01:04:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:27.957968 | orchestrator | 2026-04-11 01:04:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:27.960139 | orchestrator | 2026-04-11 01:04:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:27.960204 | orchestrator | 2026-04-11 01:04:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:31.009672 | orchestrator | 2026-04-11 01:04:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:31.009735 | orchestrator | 2026-04-11 01:04:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:31.009756 | orchestrator | 2026-04-11 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:34.051873 | orchestrator | 2026-04-11 01:04:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:34.054223 | orchestrator | 2026-04-11 01:04:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:34.054428 | orchestrator | 2026-04-11 01:04:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:37.097813 | orchestrator | 2026-04-11 01:04:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:37.099914 | orchestrator | 2026-04-11 01:04:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:37.099964 | orchestrator | 2026-04-11 01:04:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:40.142269 | orchestrator | 2026-04-11 01:04:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:04:40.143670 | orchestrator | 2026-04-11 01:04:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:40.143730 | orchestrator | 2026-04-11 01:04:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:43.187353 | orchestrator | 2026-04-11 01:04:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:43.188914 | orchestrator | 2026-04-11 01:04:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:43.188978 | orchestrator | 2026-04-11 01:04:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:46.232465 | orchestrator | 2026-04-11 01:04:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:46.232878 | orchestrator | 2026-04-11 01:04:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:46.233009 | orchestrator | 2026-04-11 01:04:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:49.269900 | orchestrator | 2026-04-11 01:04:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:49.271744 | orchestrator | 2026-04-11 01:04:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:49.271828 | orchestrator | 2026-04-11 01:04:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:52.314343 | orchestrator | 2026-04-11 01:04:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:52.316348 | orchestrator | 2026-04-11 01:04:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:52.316392 | orchestrator | 2026-04-11 01:04:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:55.359828 | orchestrator | 2026-04-11 01:04:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:55.361398 | orchestrator | 2026-04-11 01:04:55 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:55.361436 | orchestrator | 2026-04-11 01:04:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:04:58.404574 | orchestrator | 2026-04-11 01:04:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:04:58.406816 | orchestrator | 2026-04-11 01:04:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:04:58.406942 | orchestrator | 2026-04-11 01:04:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:01.455626 | orchestrator | 2026-04-11 01:05:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:01.457083 | orchestrator | 2026-04-11 01:05:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:01.457131 | orchestrator | 2026-04-11 01:05:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:04.501634 | orchestrator | 2026-04-11 01:05:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:04.502590 | orchestrator | 2026-04-11 01:05:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:04.502964 | orchestrator | 2026-04-11 01:05:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:07.549118 | orchestrator | 2026-04-11 01:05:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:07.550998 | orchestrator | 2026-04-11 01:05:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:07.551052 | orchestrator | 2026-04-11 01:05:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:10.593692 | orchestrator | 2026-04-11 01:05:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:10.594590 | orchestrator | 2026-04-11 01:05:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:05:10.594620 | orchestrator | 2026-04-11 01:05:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:13.640855 | orchestrator | 2026-04-11 01:05:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:13.643145 | orchestrator | 2026-04-11 01:05:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:13.643303 | orchestrator | 2026-04-11 01:05:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:16.679379 | orchestrator | 2026-04-11 01:05:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:16.681092 | orchestrator | 2026-04-11 01:05:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:16.681190 | orchestrator | 2026-04-11 01:05:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:19.724421 | orchestrator | 2026-04-11 01:05:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:19.726494 | orchestrator | 2026-04-11 01:05:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:19.726547 | orchestrator | 2026-04-11 01:05:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:22.769277 | orchestrator | 2026-04-11 01:05:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:22.770477 | orchestrator | 2026-04-11 01:05:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:22.770512 | orchestrator | 2026-04-11 01:05:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:25.816258 | orchestrator | 2026-04-11 01:05:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:25.818172 | orchestrator | 2026-04-11 01:05:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:25.818253 | orchestrator | 2026-04-11 01:05:25 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:05:28.862149 | orchestrator | 2026-04-11 01:05:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:28.863670 | orchestrator | 2026-04-11 01:05:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:28.863718 | orchestrator | 2026-04-11 01:05:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:31.908765 | orchestrator | 2026-04-11 01:05:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:31.911099 | orchestrator | 2026-04-11 01:05:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:31.911235 | orchestrator | 2026-04-11 01:05:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:34.952918 | orchestrator | 2026-04-11 01:05:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:34.955228 | orchestrator | 2026-04-11 01:05:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:34.955324 | orchestrator | 2026-04-11 01:05:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:38.003475 | orchestrator | 2026-04-11 01:05:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:38.007282 | orchestrator | 2026-04-11 01:05:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:38.007327 | orchestrator | 2026-04-11 01:05:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:41.053507 | orchestrator | 2026-04-11 01:05:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:41.053968 | orchestrator | 2026-04-11 01:05:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:41.054187 | orchestrator | 2026-04-11 01:05:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:44.098370 | orchestrator | 2026-04-11 
01:05:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:44.099081 | orchestrator | 2026-04-11 01:05:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:44.099100 | orchestrator | 2026-04-11 01:05:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:47.142119 | orchestrator | 2026-04-11 01:05:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:47.144128 | orchestrator | 2026-04-11 01:05:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:47.144204 | orchestrator | 2026-04-11 01:05:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:50.185688 | orchestrator | 2026-04-11 01:05:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:50.189002 | orchestrator | 2026-04-11 01:05:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:50.189069 | orchestrator | 2026-04-11 01:05:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:53.227532 | orchestrator | 2026-04-11 01:05:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:53.229151 | orchestrator | 2026-04-11 01:05:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:53.229212 | orchestrator | 2026-04-11 01:05:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:56.272896 | orchestrator | 2026-04-11 01:05:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:05:56.274737 | orchestrator | 2026-04-11 01:05:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:56.274788 | orchestrator | 2026-04-11 01:05:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:05:59.322535 | orchestrator | 2026-04-11 01:05:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:05:59.324039 | orchestrator | 2026-04-11 01:05:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:05:59.324080 | orchestrator | 2026-04-11 01:05:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:02.364313 | orchestrator | 2026-04-11 01:06:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:02.365779 | orchestrator | 2026-04-11 01:06:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:02.365811 | orchestrator | 2026-04-11 01:06:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:05.411274 | orchestrator | 2026-04-11 01:06:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:05.413384 | orchestrator | 2026-04-11 01:06:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:05.413436 | orchestrator | 2026-04-11 01:06:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:08.453891 | orchestrator | 2026-04-11 01:06:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:08.455596 | orchestrator | 2026-04-11 01:06:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:08.455651 | orchestrator | 2026-04-11 01:06:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:11.502726 | orchestrator | 2026-04-11 01:06:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:11.506188 | orchestrator | 2026-04-11 01:06:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:11.506235 | orchestrator | 2026-04-11 01:06:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:14.554985 | orchestrator | 2026-04-11 01:06:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:14.556855 | orchestrator | 2026-04-11 01:06:14 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:14.556894 | orchestrator | 2026-04-11 01:06:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:17.613193 | orchestrator | 2026-04-11 01:06:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:17.613247 | orchestrator | 2026-04-11 01:06:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:17.613835 | orchestrator | 2026-04-11 01:06:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:20.660221 | orchestrator | 2026-04-11 01:06:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:20.662486 | orchestrator | 2026-04-11 01:06:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:20.662534 | orchestrator | 2026-04-11 01:06:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:23.708310 | orchestrator | 2026-04-11 01:06:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:23.709827 | orchestrator | 2026-04-11 01:06:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:23.709915 | orchestrator | 2026-04-11 01:06:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:26.754536 | orchestrator | 2026-04-11 01:06:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:26.755615 | orchestrator | 2026-04-11 01:06:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:26.755664 | orchestrator | 2026-04-11 01:06:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:29.804896 | orchestrator | 2026-04-11 01:06:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:29.807219 | orchestrator | 2026-04-11 01:06:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:06:29.807275 | orchestrator | 2026-04-11 01:06:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:32.857605 | orchestrator | 2026-04-11 01:06:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:32.860500 | orchestrator | 2026-04-11 01:06:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:32.860568 | orchestrator | 2026-04-11 01:06:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:35.905224 | orchestrator | 2026-04-11 01:06:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:35.907799 | orchestrator | 2026-04-11 01:06:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:35.907869 | orchestrator | 2026-04-11 01:06:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:38.949393 | orchestrator | 2026-04-11 01:06:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:38.950800 | orchestrator | 2026-04-11 01:06:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:38.950845 | orchestrator | 2026-04-11 01:06:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:41.995533 | orchestrator | 2026-04-11 01:06:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:41.997059 | orchestrator | 2026-04-11 01:06:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:41.997111 | orchestrator | 2026-04-11 01:06:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:06:45.039085 | orchestrator | 2026-04-11 01:06:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:06:45.040985 | orchestrator | 2026-04-11 01:06:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:06:45.041040 | orchestrator | 2026-04-11 01:06:45 | INFO  | Wait 1 second(s) 
until the next check
2026-04-11 01:06:48.083766 | orchestrator | 2026-04-11 01:06:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:06:48.085757 | orchestrator | 2026-04-11 01:06:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:06:48.085812 | orchestrator | 2026-04-11 01:06:48 | INFO  | Wait 1 second(s) until the next check
[... identical state checks repeated roughly every 3 seconds from 01:06:51 through 01:11:59; both tasks remained in state STARTED throughout ...]
2026-04-11 01:12:02.146415 | orchestrator | 2026-04-11 01:12:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:12:02.147927 | orchestrator | 2026-04-11 01:12:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:12:02.148077 | orchestrator | 2026-04-11 01:12:02 | INFO  | Wait 1 second(s)
until the next check 2026-04-11 01:12:05.193762 | orchestrator | 2026-04-11 01:12:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:05.195701 | orchestrator | 2026-04-11 01:12:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:05.195756 | orchestrator | 2026-04-11 01:12:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:08.241696 | orchestrator | 2026-04-11 01:12:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:08.243464 | orchestrator | 2026-04-11 01:12:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:08.243531 | orchestrator | 2026-04-11 01:12:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:11.297472 | orchestrator | 2026-04-11 01:12:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:11.299859 | orchestrator | 2026-04-11 01:12:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:11.300196 | orchestrator | 2026-04-11 01:12:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:14.348899 | orchestrator | 2026-04-11 01:12:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:14.351477 | orchestrator | 2026-04-11 01:12:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:14.351526 | orchestrator | 2026-04-11 01:12:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:17.403051 | orchestrator | 2026-04-11 01:12:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:17.404422 | orchestrator | 2026-04-11 01:12:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:17.405240 | orchestrator | 2026-04-11 01:12:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:20.460044 | orchestrator | 2026-04-11 
01:12:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:20.461936 | orchestrator | 2026-04-11 01:12:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:20.462001 | orchestrator | 2026-04-11 01:12:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:23.515941 | orchestrator | 2026-04-11 01:12:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:23.518112 | orchestrator | 2026-04-11 01:12:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:23.518159 | orchestrator | 2026-04-11 01:12:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:26.562520 | orchestrator | 2026-04-11 01:12:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:26.564169 | orchestrator | 2026-04-11 01:12:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:26.564444 | orchestrator | 2026-04-11 01:12:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:29.612148 | orchestrator | 2026-04-11 01:12:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:29.613007 | orchestrator | 2026-04-11 01:12:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:29.613106 | orchestrator | 2026-04-11 01:12:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:32.661797 | orchestrator | 2026-04-11 01:12:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:32.662758 | orchestrator | 2026-04-11 01:12:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:32.662830 | orchestrator | 2026-04-11 01:12:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:35.706595 | orchestrator | 2026-04-11 01:12:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:12:35.708089 | orchestrator | 2026-04-11 01:12:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:35.708123 | orchestrator | 2026-04-11 01:12:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:38.754367 | orchestrator | 2026-04-11 01:12:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:38.756448 | orchestrator | 2026-04-11 01:12:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:38.756550 | orchestrator | 2026-04-11 01:12:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:41.810195 | orchestrator | 2026-04-11 01:12:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:41.811736 | orchestrator | 2026-04-11 01:12:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:41.811827 | orchestrator | 2026-04-11 01:12:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:44.854976 | orchestrator | 2026-04-11 01:12:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:44.856641 | orchestrator | 2026-04-11 01:12:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:44.856687 | orchestrator | 2026-04-11 01:12:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:47.914736 | orchestrator | 2026-04-11 01:12:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:47.918882 | orchestrator | 2026-04-11 01:12:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:47.919025 | orchestrator | 2026-04-11 01:12:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:50.961261 | orchestrator | 2026-04-11 01:12:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:50.964306 | orchestrator | 2026-04-11 01:12:50 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:50.964365 | orchestrator | 2026-04-11 01:12:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:54.005291 | orchestrator | 2026-04-11 01:12:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:54.006898 | orchestrator | 2026-04-11 01:12:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:54.006977 | orchestrator | 2026-04-11 01:12:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:12:57.051992 | orchestrator | 2026-04-11 01:12:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:12:57.053272 | orchestrator | 2026-04-11 01:12:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:12:57.053293 | orchestrator | 2026-04-11 01:12:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:00.095209 | orchestrator | 2026-04-11 01:13:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:00.095721 | orchestrator | 2026-04-11 01:13:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:00.095813 | orchestrator | 2026-04-11 01:13:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:03.159079 | orchestrator | 2026-04-11 01:13:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:03.161226 | orchestrator | 2026-04-11 01:13:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:03.161462 | orchestrator | 2026-04-11 01:13:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:06.210513 | orchestrator | 2026-04-11 01:13:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:06.212578 | orchestrator | 2026-04-11 01:13:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:13:06.212631 | orchestrator | 2026-04-11 01:13:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:09.261177 | orchestrator | 2026-04-11 01:13:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:09.262887 | orchestrator | 2026-04-11 01:13:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:09.262962 | orchestrator | 2026-04-11 01:13:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:12.316566 | orchestrator | 2026-04-11 01:13:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:12.318851 | orchestrator | 2026-04-11 01:13:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:12.318993 | orchestrator | 2026-04-11 01:13:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:15.363740 | orchestrator | 2026-04-11 01:13:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:15.365679 | orchestrator | 2026-04-11 01:13:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:15.365714 | orchestrator | 2026-04-11 01:13:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:18.414435 | orchestrator | 2026-04-11 01:13:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:18.415515 | orchestrator | 2026-04-11 01:13:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:18.415578 | orchestrator | 2026-04-11 01:13:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:21.487707 | orchestrator | 2026-04-11 01:13:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:21.489572 | orchestrator | 2026-04-11 01:13:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:21.489639 | orchestrator | 2026-04-11 01:13:21 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:13:24.549285 | orchestrator | 2026-04-11 01:13:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:24.550253 | orchestrator | 2026-04-11 01:13:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:24.550428 | orchestrator | 2026-04-11 01:13:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:27.608761 | orchestrator | 2026-04-11 01:13:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:27.610321 | orchestrator | 2026-04-11 01:13:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:27.610371 | orchestrator | 2026-04-11 01:13:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:30.656756 | orchestrator | 2026-04-11 01:13:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:30.658107 | orchestrator | 2026-04-11 01:13:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:30.658497 | orchestrator | 2026-04-11 01:13:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:33.706224 | orchestrator | 2026-04-11 01:13:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:33.707944 | orchestrator | 2026-04-11 01:13:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:33.708000 | orchestrator | 2026-04-11 01:13:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:36.757129 | orchestrator | 2026-04-11 01:13:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:36.759037 | orchestrator | 2026-04-11 01:13:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:36.759108 | orchestrator | 2026-04-11 01:13:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:39.807712 | orchestrator | 2026-04-11 
01:13:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:39.809854 | orchestrator | 2026-04-11 01:13:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:39.809989 | orchestrator | 2026-04-11 01:13:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:42.858965 | orchestrator | 2026-04-11 01:13:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:42.860165 | orchestrator | 2026-04-11 01:13:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:42.860258 | orchestrator | 2026-04-11 01:13:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:45.906574 | orchestrator | 2026-04-11 01:13:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:45.908026 | orchestrator | 2026-04-11 01:13:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:45.908124 | orchestrator | 2026-04-11 01:13:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:48.955965 | orchestrator | 2026-04-11 01:13:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:48.957781 | orchestrator | 2026-04-11 01:13:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:48.957840 | orchestrator | 2026-04-11 01:13:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:52.000541 | orchestrator | 2026-04-11 01:13:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:52.002068 | orchestrator | 2026-04-11 01:13:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:52.002107 | orchestrator | 2026-04-11 01:13:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:55.050769 | orchestrator | 2026-04-11 01:13:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:13:55.053418 | orchestrator | 2026-04-11 01:13:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:55.053520 | orchestrator | 2026-04-11 01:13:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:13:58.102539 | orchestrator | 2026-04-11 01:13:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:13:58.103341 | orchestrator | 2026-04-11 01:13:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:13:58.103460 | orchestrator | 2026-04-11 01:13:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:01.146809 | orchestrator | 2026-04-11 01:14:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:01.152015 | orchestrator | 2026-04-11 01:14:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:01.152094 | orchestrator | 2026-04-11 01:14:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:04.201107 | orchestrator | 2026-04-11 01:14:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:04.204050 | orchestrator | 2026-04-11 01:14:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:04.204139 | orchestrator | 2026-04-11 01:14:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:07.246959 | orchestrator | 2026-04-11 01:14:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:07.248400 | orchestrator | 2026-04-11 01:14:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:07.248467 | orchestrator | 2026-04-11 01:14:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:10.292024 | orchestrator | 2026-04-11 01:14:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:10.295758 | orchestrator | 2026-04-11 01:14:10 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:10.295933 | orchestrator | 2026-04-11 01:14:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:13.344104 | orchestrator | 2026-04-11 01:14:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:13.345738 | orchestrator | 2026-04-11 01:14:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:13.345814 | orchestrator | 2026-04-11 01:14:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:16.386904 | orchestrator | 2026-04-11 01:14:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:16.388206 | orchestrator | 2026-04-11 01:14:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:16.388241 | orchestrator | 2026-04-11 01:14:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:19.430929 | orchestrator | 2026-04-11 01:14:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:19.431440 | orchestrator | 2026-04-11 01:14:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:19.431517 | orchestrator | 2026-04-11 01:14:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:22.483482 | orchestrator | 2026-04-11 01:14:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:22.484323 | orchestrator | 2026-04-11 01:14:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:22.484372 | orchestrator | 2026-04-11 01:14:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:25.533036 | orchestrator | 2026-04-11 01:14:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:25.534939 | orchestrator | 2026-04-11 01:14:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:14:25.535000 | orchestrator | 2026-04-11 01:14:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:28.581055 | orchestrator | 2026-04-11 01:14:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:28.582533 | orchestrator | 2026-04-11 01:14:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:28.582581 | orchestrator | 2026-04-11 01:14:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:31.624843 | orchestrator | 2026-04-11 01:14:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:31.628499 | orchestrator | 2026-04-11 01:14:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:31.628854 | orchestrator | 2026-04-11 01:14:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:34.676497 | orchestrator | 2026-04-11 01:14:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:34.678414 | orchestrator | 2026-04-11 01:14:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:34.678488 | orchestrator | 2026-04-11 01:14:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:37.721507 | orchestrator | 2026-04-11 01:14:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:37.723065 | orchestrator | 2026-04-11 01:14:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:37.723176 | orchestrator | 2026-04-11 01:14:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:40.769761 | orchestrator | 2026-04-11 01:14:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:40.771055 | orchestrator | 2026-04-11 01:14:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:40.771270 | orchestrator | 2026-04-11 01:14:40 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:14:43.825533 | orchestrator | 2026-04-11 01:14:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:43.826851 | orchestrator | 2026-04-11 01:14:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:43.826995 | orchestrator | 2026-04-11 01:14:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:46.873275 | orchestrator | 2026-04-11 01:14:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:46.875117 | orchestrator | 2026-04-11 01:14:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:46.875266 | orchestrator | 2026-04-11 01:14:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:49.920127 | orchestrator | 2026-04-11 01:14:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:49.921853 | orchestrator | 2026-04-11 01:14:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:49.921889 | orchestrator | 2026-04-11 01:14:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:52.970046 | orchestrator | 2026-04-11 01:14:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:52.972008 | orchestrator | 2026-04-11 01:14:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:52.972074 | orchestrator | 2026-04-11 01:14:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:56.016918 | orchestrator | 2026-04-11 01:14:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:56.018230 | orchestrator | 2026-04-11 01:14:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:56.018274 | orchestrator | 2026-04-11 01:14:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:14:59.070048 | orchestrator | 2026-04-11 
01:14:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:14:59.071640 | orchestrator | 2026-04-11 01:14:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:14:59.071675 | orchestrator | 2026-04-11 01:14:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:02.115553 | orchestrator | 2026-04-11 01:15:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:02.117338 | orchestrator | 2026-04-11 01:15:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:02.117581 | orchestrator | 2026-04-11 01:15:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:05.169344 | orchestrator | 2026-04-11 01:15:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:05.170579 | orchestrator | 2026-04-11 01:15:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:05.170678 | orchestrator | 2026-04-11 01:15:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:08.212183 | orchestrator | 2026-04-11 01:15:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:08.213624 | orchestrator | 2026-04-11 01:15:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:08.213715 | orchestrator | 2026-04-11 01:15:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:11.262824 | orchestrator | 2026-04-11 01:15:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:11.264346 | orchestrator | 2026-04-11 01:15:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:11.264424 | orchestrator | 2026-04-11 01:15:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:14.316218 | orchestrator | 2026-04-11 01:15:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:15:14.317254 | orchestrator | 2026-04-11 01:15:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:14.317333 | orchestrator | 2026-04-11 01:15:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:17.363581 | orchestrator | 2026-04-11 01:15:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:17.365574 | orchestrator | 2026-04-11 01:15:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:17.365626 | orchestrator | 2026-04-11 01:15:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:20.414551 | orchestrator | 2026-04-11 01:15:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:20.416446 | orchestrator | 2026-04-11 01:15:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:20.416574 | orchestrator | 2026-04-11 01:15:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:23.463377 | orchestrator | 2026-04-11 01:15:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:23.465334 | orchestrator | 2026-04-11 01:15:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:23.465442 | orchestrator | 2026-04-11 01:15:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:26.507694 | orchestrator | 2026-04-11 01:15:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:26.509420 | orchestrator | 2026-04-11 01:15:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:26.509488 | orchestrator | 2026-04-11 01:15:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:29.551127 | orchestrator | 2026-04-11 01:15:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:29.552476 | orchestrator | 2026-04-11 01:15:29 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:29.552528 | orchestrator | 2026-04-11 01:15:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:32.593466 | orchestrator | 2026-04-11 01:15:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:32.594867 | orchestrator | 2026-04-11 01:15:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:32.594922 | orchestrator | 2026-04-11 01:15:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:35.641855 | orchestrator | 2026-04-11 01:15:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:35.643335 | orchestrator | 2026-04-11 01:15:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:35.643441 | orchestrator | 2026-04-11 01:15:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:38.692689 | orchestrator | 2026-04-11 01:15:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:38.694605 | orchestrator | 2026-04-11 01:15:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:38.694792 | orchestrator | 2026-04-11 01:15:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:41.742719 | orchestrator | 2026-04-11 01:15:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:41.744477 | orchestrator | 2026-04-11 01:15:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:15:41.744521 | orchestrator | 2026-04-11 01:15:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:15:44.794403 | orchestrator | 2026-04-11 01:15:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:15:44.796378 | orchestrator | 2026-04-11 01:15:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:15:44.796440 | orchestrator | 2026-04-11 01:15:44 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:15:47.841398 | orchestrator | 2026-04-11 01:15:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:15:47.843021 | orchestrator | 2026-04-11 01:15:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:15:47.843065 | orchestrator | 2026-04-11 01:15:47 | INFO  | Wait 1 second(s) until the next check
[identical status checks repeated every ~3 seconds from 01:15:50 to 01:23:14; both tasks remained in state STARTED throughout; log gap between 01:18:26 and 01:20:29]
2026-04-11 01:23:17.053620 | orchestrator | 2026-04-11 01:23:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:23:17.055283 | orchestrator | 2026-04-11 01:23:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:23:17.055331 | orchestrator | 2026-04-11 01:23:17 | INFO  | Wait 1 second(s)
until the next check 2026-04-11 01:23:20.093009 | orchestrator | 2026-04-11 01:23:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:23:20.094842 | orchestrator | 2026-04-11 01:23:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:20.094952 | orchestrator | 2026-04-11 01:23:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:23:23.145033 | orchestrator | 2026-04-11 01:23:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:23:23.146439 | orchestrator | 2026-04-11 01:23:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:23.146495 | orchestrator | 2026-04-11 01:23:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:23:26.186334 | orchestrator | 2026-04-11 01:23:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:23:26.187340 | orchestrator | 2026-04-11 01:23:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:26.187401 | orchestrator | 2026-04-11 01:23:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:23:29.231270 | orchestrator | 2026-04-11 01:23:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:23:29.233401 | orchestrator | 2026-04-11 01:23:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:29.233448 | orchestrator | 2026-04-11 01:23:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:23:32.277703 | orchestrator | 2026-04-11 01:23:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:23:32.279562 | orchestrator | 2026-04-11 01:23:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:32.279611 | orchestrator | 2026-04-11 01:23:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:23:35.325328 | orchestrator | 2026-04-11 
01:23:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:23:35.327584 | orchestrator | 2026-04-11 01:23:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:35.327686 | orchestrator | 2026-04-11 01:23:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:23:38.374622 | orchestrator | 2026-04-11 01:23:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:23:38.377340 | orchestrator | 2026-04-11 01:23:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:38.377415 | orchestrator | 2026-04-11 01:23:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:23:41.422281 | orchestrator | 2026-04-11 01:23:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:23:41.424742 | orchestrator | 2026-04-11 01:23:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:41.424963 | orchestrator | 2026-04-11 01:23:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:23:44.463773 | orchestrator | 2026-04-11 01:23:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:23:44.466405 | orchestrator | 2026-04-11 01:23:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:44.466733 | orchestrator | 2026-04-11 01:23:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:23:47.509374 | orchestrator | 2026-04-11 01:23:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:23:47.512439 | orchestrator | 2026-04-11 01:23:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:47.512493 | orchestrator | 2026-04-11 01:23:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:23:50.551500 | orchestrator | 2026-04-11 01:23:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:23:50.554389 | orchestrator | 2026-04-11 01:23:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:50.554453 | orchestrator | 2026-04-11 01:23:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:23:53.597848 | orchestrator | 2026-04-11 01:23:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:23:53.599318 | orchestrator | 2026-04-11 01:23:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:53.599400 | orchestrator | 2026-04-11 01:23:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:23:56.643004 | orchestrator | 2026-04-11 01:23:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:23:56.644075 | orchestrator | 2026-04-11 01:23:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:56.644109 | orchestrator | 2026-04-11 01:23:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:23:59.685780 | orchestrator | 2026-04-11 01:23:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:23:59.688362 | orchestrator | 2026-04-11 01:23:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:23:59.688433 | orchestrator | 2026-04-11 01:23:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:02.730125 | orchestrator | 2026-04-11 01:24:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:02.731781 | orchestrator | 2026-04-11 01:24:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:02.731831 | orchestrator | 2026-04-11 01:24:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:05.774549 | orchestrator | 2026-04-11 01:24:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:05.777288 | orchestrator | 2026-04-11 01:24:05 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:05.777336 | orchestrator | 2026-04-11 01:24:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:08.822251 | orchestrator | 2026-04-11 01:24:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:08.824174 | orchestrator | 2026-04-11 01:24:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:08.824235 | orchestrator | 2026-04-11 01:24:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:11.867833 | orchestrator | 2026-04-11 01:24:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:11.869583 | orchestrator | 2026-04-11 01:24:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:11.869697 | orchestrator | 2026-04-11 01:24:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:14.906316 | orchestrator | 2026-04-11 01:24:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:14.907791 | orchestrator | 2026-04-11 01:24:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:14.907874 | orchestrator | 2026-04-11 01:24:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:17.948376 | orchestrator | 2026-04-11 01:24:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:17.950365 | orchestrator | 2026-04-11 01:24:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:17.950450 | orchestrator | 2026-04-11 01:24:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:20.994223 | orchestrator | 2026-04-11 01:24:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:21.002682 | orchestrator | 2026-04-11 01:24:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:24:21.002802 | orchestrator | 2026-04-11 01:24:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:24.045568 | orchestrator | 2026-04-11 01:24:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:24.047122 | orchestrator | 2026-04-11 01:24:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:24.047218 | orchestrator | 2026-04-11 01:24:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:27.090550 | orchestrator | 2026-04-11 01:24:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:27.090704 | orchestrator | 2026-04-11 01:24:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:27.090792 | orchestrator | 2026-04-11 01:24:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:30.130298 | orchestrator | 2026-04-11 01:24:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:30.132274 | orchestrator | 2026-04-11 01:24:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:30.132390 | orchestrator | 2026-04-11 01:24:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:33.179105 | orchestrator | 2026-04-11 01:24:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:33.181033 | orchestrator | 2026-04-11 01:24:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:33.181198 | orchestrator | 2026-04-11 01:24:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:36.222087 | orchestrator | 2026-04-11 01:24:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:36.224233 | orchestrator | 2026-04-11 01:24:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:36.224445 | orchestrator | 2026-04-11 01:24:36 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:24:39.272256 | orchestrator | 2026-04-11 01:24:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:39.274731 | orchestrator | 2026-04-11 01:24:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:39.274829 | orchestrator | 2026-04-11 01:24:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:42.328648 | orchestrator | 2026-04-11 01:24:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:42.330655 | orchestrator | 2026-04-11 01:24:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:42.330736 | orchestrator | 2026-04-11 01:24:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:45.370761 | orchestrator | 2026-04-11 01:24:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:45.371552 | orchestrator | 2026-04-11 01:24:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:45.371617 | orchestrator | 2026-04-11 01:24:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:48.414804 | orchestrator | 2026-04-11 01:24:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:48.416230 | orchestrator | 2026-04-11 01:24:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:48.416712 | orchestrator | 2026-04-11 01:24:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:51.459124 | orchestrator | 2026-04-11 01:24:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:51.460511 | orchestrator | 2026-04-11 01:24:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:51.460611 | orchestrator | 2026-04-11 01:24:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:54.505368 | orchestrator | 2026-04-11 
01:24:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:54.507707 | orchestrator | 2026-04-11 01:24:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:54.507798 | orchestrator | 2026-04-11 01:24:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:24:57.554603 | orchestrator | 2026-04-11 01:24:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:24:57.557210 | orchestrator | 2026-04-11 01:24:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:24:57.557301 | orchestrator | 2026-04-11 01:24:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:00.598415 | orchestrator | 2026-04-11 01:25:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:00.599410 | orchestrator | 2026-04-11 01:25:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:00.599467 | orchestrator | 2026-04-11 01:25:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:03.638437 | orchestrator | 2026-04-11 01:25:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:03.640327 | orchestrator | 2026-04-11 01:25:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:03.640459 | orchestrator | 2026-04-11 01:25:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:06.681341 | orchestrator | 2026-04-11 01:25:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:06.683234 | orchestrator | 2026-04-11 01:25:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:06.683302 | orchestrator | 2026-04-11 01:25:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:09.734267 | orchestrator | 2026-04-11 01:25:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:25:09.735969 | orchestrator | 2026-04-11 01:25:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:09.736071 | orchestrator | 2026-04-11 01:25:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:12.785278 | orchestrator | 2026-04-11 01:25:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:12.789156 | orchestrator | 2026-04-11 01:25:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:12.789247 | orchestrator | 2026-04-11 01:25:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:15.838137 | orchestrator | 2026-04-11 01:25:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:15.840554 | orchestrator | 2026-04-11 01:25:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:15.840607 | orchestrator | 2026-04-11 01:25:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:18.888584 | orchestrator | 2026-04-11 01:25:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:18.890174 | orchestrator | 2026-04-11 01:25:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:18.890242 | orchestrator | 2026-04-11 01:25:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:21.932516 | orchestrator | 2026-04-11 01:25:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:21.934310 | orchestrator | 2026-04-11 01:25:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:21.934378 | orchestrator | 2026-04-11 01:25:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:24.977342 | orchestrator | 2026-04-11 01:25:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:24.979453 | orchestrator | 2026-04-11 01:25:24 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:24.979559 | orchestrator | 2026-04-11 01:25:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:28.028779 | orchestrator | 2026-04-11 01:25:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:28.029596 | orchestrator | 2026-04-11 01:25:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:28.029676 | orchestrator | 2026-04-11 01:25:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:31.070722 | orchestrator | 2026-04-11 01:25:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:31.071330 | orchestrator | 2026-04-11 01:25:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:31.071379 | orchestrator | 2026-04-11 01:25:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:34.122066 | orchestrator | 2026-04-11 01:25:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:34.123815 | orchestrator | 2026-04-11 01:25:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:34.123870 | orchestrator | 2026-04-11 01:25:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:37.170550 | orchestrator | 2026-04-11 01:25:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:37.172057 | orchestrator | 2026-04-11 01:25:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:37.172116 | orchestrator | 2026-04-11 01:25:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:40.221639 | orchestrator | 2026-04-11 01:25:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:40.223833 | orchestrator | 2026-04-11 01:25:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:25:40.223894 | orchestrator | 2026-04-11 01:25:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:43.269502 | orchestrator | 2026-04-11 01:25:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:43.272554 | orchestrator | 2026-04-11 01:25:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:43.272715 | orchestrator | 2026-04-11 01:25:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:46.314704 | orchestrator | 2026-04-11 01:25:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:46.316858 | orchestrator | 2026-04-11 01:25:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:46.316922 | orchestrator | 2026-04-11 01:25:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:49.361148 | orchestrator | 2026-04-11 01:25:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:49.363708 | orchestrator | 2026-04-11 01:25:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:49.363785 | orchestrator | 2026-04-11 01:25:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:52.409228 | orchestrator | 2026-04-11 01:25:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:52.412921 | orchestrator | 2026-04-11 01:25:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:52.413027 | orchestrator | 2026-04-11 01:25:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:25:55.470930 | orchestrator | 2026-04-11 01:25:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:55.472364 | orchestrator | 2026-04-11 01:25:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:55.472413 | orchestrator | 2026-04-11 01:25:55 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:25:58.524777 | orchestrator | 2026-04-11 01:25:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:25:58.527134 | orchestrator | 2026-04-11 01:25:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:25:58.527370 | orchestrator | 2026-04-11 01:25:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:01.571598 | orchestrator | 2026-04-11 01:26:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:01.573476 | orchestrator | 2026-04-11 01:26:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:01.573520 | orchestrator | 2026-04-11 01:26:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:04.619453 | orchestrator | 2026-04-11 01:26:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:04.622394 | orchestrator | 2026-04-11 01:26:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:04.622521 | orchestrator | 2026-04-11 01:26:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:07.666814 | orchestrator | 2026-04-11 01:26:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:07.668221 | orchestrator | 2026-04-11 01:26:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:07.668308 | orchestrator | 2026-04-11 01:26:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:10.716714 | orchestrator | 2026-04-11 01:26:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:10.718528 | orchestrator | 2026-04-11 01:26:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:10.718556 | orchestrator | 2026-04-11 01:26:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:13.770136 | orchestrator | 2026-04-11 
01:26:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:13.772262 | orchestrator | 2026-04-11 01:26:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:13.772400 | orchestrator | 2026-04-11 01:26:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:16.818377 | orchestrator | 2026-04-11 01:26:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:16.819801 | orchestrator | 2026-04-11 01:26:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:16.819846 | orchestrator | 2026-04-11 01:26:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:19.865342 | orchestrator | 2026-04-11 01:26:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:19.866618 | orchestrator | 2026-04-11 01:26:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:19.866682 | orchestrator | 2026-04-11 01:26:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:22.914655 | orchestrator | 2026-04-11 01:26:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:22.916813 | orchestrator | 2026-04-11 01:26:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:22.916876 | orchestrator | 2026-04-11 01:26:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:25.966249 | orchestrator | 2026-04-11 01:26:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:25.968555 | orchestrator | 2026-04-11 01:26:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:25.968631 | orchestrator | 2026-04-11 01:26:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:29.020357 | orchestrator | 2026-04-11 01:26:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:26:29.021864 | orchestrator | 2026-04-11 01:26:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:29.021914 | orchestrator | 2026-04-11 01:26:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:32.070739 | orchestrator | 2026-04-11 01:26:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:32.073849 | orchestrator | 2026-04-11 01:26:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:32.074702 | orchestrator | 2026-04-11 01:26:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:35.119894 | orchestrator | 2026-04-11 01:26:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:35.122977 | orchestrator | 2026-04-11 01:26:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:35.123035 | orchestrator | 2026-04-11 01:26:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:38.167910 | orchestrator | 2026-04-11 01:26:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:38.169880 | orchestrator | 2026-04-11 01:26:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:38.169974 | orchestrator | 2026-04-11 01:26:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:41.219075 | orchestrator | 2026-04-11 01:26:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:41.220893 | orchestrator | 2026-04-11 01:26:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:41.220949 | orchestrator | 2026-04-11 01:26:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:44.262251 | orchestrator | 2026-04-11 01:26:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:44.264065 | orchestrator | 2026-04-11 01:26:44 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:44.264137 | orchestrator | 2026-04-11 01:26:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:47.312684 | orchestrator | 2026-04-11 01:26:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:47.315199 | orchestrator | 2026-04-11 01:26:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:47.315449 | orchestrator | 2026-04-11 01:26:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:50.362453 | orchestrator | 2026-04-11 01:26:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:50.364464 | orchestrator | 2026-04-11 01:26:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:50.364518 | orchestrator | 2026-04-11 01:26:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:53.412508 | orchestrator | 2026-04-11 01:26:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:53.414888 | orchestrator | 2026-04-11 01:26:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:53.415054 | orchestrator | 2026-04-11 01:26:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:56.457742 | orchestrator | 2026-04-11 01:26:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:56.459475 | orchestrator | 2026-04-11 01:26:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:26:56.459619 | orchestrator | 2026-04-11 01:26:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:26:59.506556 | orchestrator | 2026-04-11 01:26:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:26:59.509376 | orchestrator | 2026-04-11 01:26:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:26:59.509476 | orchestrator | 2026-04-11 01:26:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:27:02.555562 | orchestrator | 2026-04-11 01:27:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:27:02.558008 | orchestrator | 2026-04-11 01:27:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:27:02.558123 | orchestrator | 2026-04-11 01:27:02 | INFO  | Wait 1 second(s) until the next check [... identical polling output repeated every ~3 seconds from 2026-04-11 01:27:05 to 2026-04-11 01:31:58; both tasks remain in state STARTED throughout ...] 2026-04-11 01:32:01.373667 | orchestrator | 2026-04-11 01:32:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:01.375143 | orchestrator | 2026-04-11 01:32:01 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:01.375169 | orchestrator | 2026-04-11 01:32:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:04.424156 | orchestrator | 2026-04-11 01:32:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:04.426210 | orchestrator | 2026-04-11 01:32:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:04.426290 | orchestrator | 2026-04-11 01:32:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:07.475079 | orchestrator | 2026-04-11 01:32:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:07.477903 | orchestrator | 2026-04-11 01:32:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:07.477965 | orchestrator | 2026-04-11 01:32:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:10.523476 | orchestrator | 2026-04-11 01:32:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:10.526127 | orchestrator | 2026-04-11 01:32:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:10.526215 | orchestrator | 2026-04-11 01:32:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:13.569756 | orchestrator | 2026-04-11 01:32:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:13.571446 | orchestrator | 2026-04-11 01:32:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:13.571605 | orchestrator | 2026-04-11 01:32:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:16.615332 | orchestrator | 2026-04-11 01:32:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:16.616887 | orchestrator | 2026-04-11 01:32:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:32:16.617052 | orchestrator | 2026-04-11 01:32:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:19.665190 | orchestrator | 2026-04-11 01:32:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:19.668435 | orchestrator | 2026-04-11 01:32:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:19.668577 | orchestrator | 2026-04-11 01:32:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:22.716501 | orchestrator | 2026-04-11 01:32:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:22.721892 | orchestrator | 2026-04-11 01:32:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:22.721963 | orchestrator | 2026-04-11 01:32:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:25.772124 | orchestrator | 2026-04-11 01:32:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:25.776619 | orchestrator | 2026-04-11 01:32:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:25.776725 | orchestrator | 2026-04-11 01:32:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:28.825844 | orchestrator | 2026-04-11 01:32:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:28.827965 | orchestrator | 2026-04-11 01:32:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:28.828050 | orchestrator | 2026-04-11 01:32:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:31.880889 | orchestrator | 2026-04-11 01:32:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:31.882617 | orchestrator | 2026-04-11 01:32:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:31.882678 | orchestrator | 2026-04-11 01:32:31 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:32:34.930671 | orchestrator | 2026-04-11 01:32:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:34.932599 | orchestrator | 2026-04-11 01:32:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:34.932675 | orchestrator | 2026-04-11 01:32:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:37.978402 | orchestrator | 2026-04-11 01:32:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:37.980626 | orchestrator | 2026-04-11 01:32:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:37.980684 | orchestrator | 2026-04-11 01:32:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:41.033068 | orchestrator | 2026-04-11 01:32:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:41.035484 | orchestrator | 2026-04-11 01:32:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:41.035589 | orchestrator | 2026-04-11 01:32:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:44.082222 | orchestrator | 2026-04-11 01:32:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:44.083407 | orchestrator | 2026-04-11 01:32:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:44.083455 | orchestrator | 2026-04-11 01:32:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:47.126230 | orchestrator | 2026-04-11 01:32:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:47.127705 | orchestrator | 2026-04-11 01:32:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:47.127776 | orchestrator | 2026-04-11 01:32:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:50.172135 | orchestrator | 2026-04-11 
01:32:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:50.173358 | orchestrator | 2026-04-11 01:32:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:50.173485 | orchestrator | 2026-04-11 01:32:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:53.216067 | orchestrator | 2026-04-11 01:32:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:53.218374 | orchestrator | 2026-04-11 01:32:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:53.218472 | orchestrator | 2026-04-11 01:32:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:56.265116 | orchestrator | 2026-04-11 01:32:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:56.266950 | orchestrator | 2026-04-11 01:32:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:56.267016 | orchestrator | 2026-04-11 01:32:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:32:59.314691 | orchestrator | 2026-04-11 01:32:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:32:59.316292 | orchestrator | 2026-04-11 01:32:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:32:59.316490 | orchestrator | 2026-04-11 01:32:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:02.358396 | orchestrator | 2026-04-11 01:33:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:02.359733 | orchestrator | 2026-04-11 01:33:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:02.359767 | orchestrator | 2026-04-11 01:33:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:05.404179 | orchestrator | 2026-04-11 01:33:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:33:05.406589 | orchestrator | 2026-04-11 01:33:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:05.406683 | orchestrator | 2026-04-11 01:33:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:08.454892 | orchestrator | 2026-04-11 01:33:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:08.457564 | orchestrator | 2026-04-11 01:33:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:08.457659 | orchestrator | 2026-04-11 01:33:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:11.507947 | orchestrator | 2026-04-11 01:33:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:11.509841 | orchestrator | 2026-04-11 01:33:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:11.509927 | orchestrator | 2026-04-11 01:33:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:14.557732 | orchestrator | 2026-04-11 01:33:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:14.559816 | orchestrator | 2026-04-11 01:33:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:14.559861 | orchestrator | 2026-04-11 01:33:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:17.606896 | orchestrator | 2026-04-11 01:33:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:17.608759 | orchestrator | 2026-04-11 01:33:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:17.608835 | orchestrator | 2026-04-11 01:33:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:20.656843 | orchestrator | 2026-04-11 01:33:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:20.658479 | orchestrator | 2026-04-11 01:33:20 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:20.658541 | orchestrator | 2026-04-11 01:33:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:23.708191 | orchestrator | 2026-04-11 01:33:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:23.709703 | orchestrator | 2026-04-11 01:33:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:23.709763 | orchestrator | 2026-04-11 01:33:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:26.759310 | orchestrator | 2026-04-11 01:33:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:26.761473 | orchestrator | 2026-04-11 01:33:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:26.761578 | orchestrator | 2026-04-11 01:33:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:29.806992 | orchestrator | 2026-04-11 01:33:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:29.808635 | orchestrator | 2026-04-11 01:33:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:29.808714 | orchestrator | 2026-04-11 01:33:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:32.848694 | orchestrator | 2026-04-11 01:33:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:32.850220 | orchestrator | 2026-04-11 01:33:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:32.850273 | orchestrator | 2026-04-11 01:33:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:35.892000 | orchestrator | 2026-04-11 01:33:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:35.893693 | orchestrator | 2026-04-11 01:33:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:33:35.893762 | orchestrator | 2026-04-11 01:33:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:38.934261 | orchestrator | 2026-04-11 01:33:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:38.936600 | orchestrator | 2026-04-11 01:33:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:38.937292 | orchestrator | 2026-04-11 01:33:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:41.985995 | orchestrator | 2026-04-11 01:33:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:41.987237 | orchestrator | 2026-04-11 01:33:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:41.987290 | orchestrator | 2026-04-11 01:33:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:45.036180 | orchestrator | 2026-04-11 01:33:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:45.037996 | orchestrator | 2026-04-11 01:33:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:45.038086 | orchestrator | 2026-04-11 01:33:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:48.084788 | orchestrator | 2026-04-11 01:33:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:48.086374 | orchestrator | 2026-04-11 01:33:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:48.086419 | orchestrator | 2026-04-11 01:33:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:51.137626 | orchestrator | 2026-04-11 01:33:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:51.140410 | orchestrator | 2026-04-11 01:33:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:51.140968 | orchestrator | 2026-04-11 01:33:51 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:33:54.185825 | orchestrator | 2026-04-11 01:33:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:54.187463 | orchestrator | 2026-04-11 01:33:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:54.187525 | orchestrator | 2026-04-11 01:33:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:33:57.234534 | orchestrator | 2026-04-11 01:33:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:33:57.235873 | orchestrator | 2026-04-11 01:33:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:33:57.235920 | orchestrator | 2026-04-11 01:33:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:00.273664 | orchestrator | 2026-04-11 01:34:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:00.275502 | orchestrator | 2026-04-11 01:34:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:00.275616 | orchestrator | 2026-04-11 01:34:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:03.326286 | orchestrator | 2026-04-11 01:34:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:03.328585 | orchestrator | 2026-04-11 01:34:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:03.328645 | orchestrator | 2026-04-11 01:34:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:06.380727 | orchestrator | 2026-04-11 01:34:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:06.384659 | orchestrator | 2026-04-11 01:34:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:06.384723 | orchestrator | 2026-04-11 01:34:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:09.433651 | orchestrator | 2026-04-11 
01:34:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:09.435909 | orchestrator | 2026-04-11 01:34:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:09.435991 | orchestrator | 2026-04-11 01:34:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:12.480607 | orchestrator | 2026-04-11 01:34:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:12.483556 | orchestrator | 2026-04-11 01:34:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:12.483629 | orchestrator | 2026-04-11 01:34:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:15.525849 | orchestrator | 2026-04-11 01:34:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:15.527142 | orchestrator | 2026-04-11 01:34:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:15.527195 | orchestrator | 2026-04-11 01:34:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:18.574412 | orchestrator | 2026-04-11 01:34:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:18.575573 | orchestrator | 2026-04-11 01:34:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:18.575672 | orchestrator | 2026-04-11 01:34:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:21.617345 | orchestrator | 2026-04-11 01:34:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:21.618729 | orchestrator | 2026-04-11 01:34:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:21.618767 | orchestrator | 2026-04-11 01:34:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:24.666712 | orchestrator | 2026-04-11 01:34:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:34:24.668431 | orchestrator | 2026-04-11 01:34:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:24.668509 | orchestrator | 2026-04-11 01:34:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:27.716518 | orchestrator | 2026-04-11 01:34:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:27.718062 | orchestrator | 2026-04-11 01:34:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:27.718189 | orchestrator | 2026-04-11 01:34:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:30.766901 | orchestrator | 2026-04-11 01:34:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:30.768912 | orchestrator | 2026-04-11 01:34:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:30.769021 | orchestrator | 2026-04-11 01:34:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:33.812725 | orchestrator | 2026-04-11 01:34:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:33.814942 | orchestrator | 2026-04-11 01:34:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:33.814990 | orchestrator | 2026-04-11 01:34:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:36.858883 | orchestrator | 2026-04-11 01:34:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:36.860615 | orchestrator | 2026-04-11 01:34:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:36.860678 | orchestrator | 2026-04-11 01:34:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:39.904501 | orchestrator | 2026-04-11 01:34:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:39.906437 | orchestrator | 2026-04-11 01:34:39 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:39.906507 | orchestrator | 2026-04-11 01:34:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:42.952712 | orchestrator | 2026-04-11 01:34:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:42.954296 | orchestrator | 2026-04-11 01:34:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:42.954349 | orchestrator | 2026-04-11 01:34:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:46.001014 | orchestrator | 2026-04-11 01:34:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:46.004342 | orchestrator | 2026-04-11 01:34:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:46.004576 | orchestrator | 2026-04-11 01:34:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:49.053360 | orchestrator | 2026-04-11 01:34:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:49.055142 | orchestrator | 2026-04-11 01:34:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:49.055186 | orchestrator | 2026-04-11 01:34:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:52.111314 | orchestrator | 2026-04-11 01:34:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:52.115007 | orchestrator | 2026-04-11 01:34:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:52.115083 | orchestrator | 2026-04-11 01:34:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:55.160162 | orchestrator | 2026-04-11 01:34:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:55.162363 | orchestrator | 2026-04-11 01:34:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:34:55.162455 | orchestrator | 2026-04-11 01:34:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:34:58.209715 | orchestrator | 2026-04-11 01:34:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:34:58.212021 | orchestrator | 2026-04-11 01:34:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:34:58.212100 | orchestrator | 2026-04-11 01:34:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:01.252910 | orchestrator | 2026-04-11 01:35:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:01.253843 | orchestrator | 2026-04-11 01:35:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:01.253893 | orchestrator | 2026-04-11 01:35:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:04.304562 | orchestrator | 2026-04-11 01:35:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:04.305986 | orchestrator | 2026-04-11 01:35:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:04.306054 | orchestrator | 2026-04-11 01:35:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:07.351392 | orchestrator | 2026-04-11 01:35:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:07.353329 | orchestrator | 2026-04-11 01:35:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:07.353392 | orchestrator | 2026-04-11 01:35:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:10.404236 | orchestrator | 2026-04-11 01:35:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:10.405678 | orchestrator | 2026-04-11 01:35:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:10.405756 | orchestrator | 2026-04-11 01:35:10 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:35:13.450226 | orchestrator | 2026-04-11 01:35:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:13.452257 | orchestrator | 2026-04-11 01:35:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:13.452317 | orchestrator | 2026-04-11 01:35:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:16.503364 | orchestrator | 2026-04-11 01:35:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:16.505410 | orchestrator | 2026-04-11 01:35:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:16.505476 | orchestrator | 2026-04-11 01:35:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:19.554011 | orchestrator | 2026-04-11 01:35:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:19.555809 | orchestrator | 2026-04-11 01:35:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:19.555885 | orchestrator | 2026-04-11 01:35:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:22.603611 | orchestrator | 2026-04-11 01:35:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:22.618004 | orchestrator | 2026-04-11 01:35:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:22.618120 | orchestrator | 2026-04-11 01:35:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:25.663286 | orchestrator | 2026-04-11 01:35:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:25.664980 | orchestrator | 2026-04-11 01:35:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:25.665056 | orchestrator | 2026-04-11 01:35:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:28.714326 | orchestrator | 2026-04-11 
01:35:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:28.716250 | orchestrator | 2026-04-11 01:35:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:28.716411 | orchestrator | 2026-04-11 01:35:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:31.762224 | orchestrator | 2026-04-11 01:35:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:31.764235 | orchestrator | 2026-04-11 01:35:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:31.764282 | orchestrator | 2026-04-11 01:35:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:34.813230 | orchestrator | 2026-04-11 01:35:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:34.814239 | orchestrator | 2026-04-11 01:35:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:34.814285 | orchestrator | 2026-04-11 01:35:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:37.862328 | orchestrator | 2026-04-11 01:35:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:37.864330 | orchestrator | 2026-04-11 01:35:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:37.864413 | orchestrator | 2026-04-11 01:35:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:40.911530 | orchestrator | 2026-04-11 01:35:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:35:40.912386 | orchestrator | 2026-04-11 01:35:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:35:40.912480 | orchestrator | 2026-04-11 01:35:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:35:43.958664 | orchestrator | 2026-04-11 01:35:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:35:43.961147 | orchestrator | 2026-04-11 01:35:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:35:43.961247 | orchestrator | 2026-04-11 01:35:43 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:35:47.005013 | orchestrator | 2026-04-11 01:35:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:35:47.007321 | orchestrator | 2026-04-11 01:35:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:35:47.007363 | orchestrator | 2026-04-11 01:35:47 | INFO  | Wait 1 second(s) until the next check
[... identical polling messages repeated every ~3 seconds: tasks e775d4e9-d8d9-4a6e-852d-ec153b94cbfc and 27a6903c-a509-4fb9-bf78-996bf8bed0c6 both remain in state STARTED from 01:35:50 through 01:41:13 ...]
2026-04-11 01:41:13.322267 | orchestrator | 2026-04-11 01:41:13 | INFO  | Wait 1 second(s) until the next check
2026-04-11 01:41:16.366531 | orchestrator | 2026-04-11 01:41:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:41:16.367371 | orchestrator | 2026-04-11 01:41:16 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:16.367433 | orchestrator | 2026-04-11 01:41:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:41:19.420570 | orchestrator | 2026-04-11 01:41:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:19.422366 | orchestrator | 2026-04-11 01:41:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:19.422462 | orchestrator | 2026-04-11 01:41:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:41:22.468325 | orchestrator | 2026-04-11 01:41:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:22.469438 | orchestrator | 2026-04-11 01:41:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:22.469526 | orchestrator | 2026-04-11 01:41:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:41:25.519682 | orchestrator | 2026-04-11 01:41:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:25.521573 | orchestrator | 2026-04-11 01:41:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:25.521634 | orchestrator | 2026-04-11 01:41:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:41:28.564677 | orchestrator | 2026-04-11 01:41:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:28.566942 | orchestrator | 2026-04-11 01:41:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:28.567062 | orchestrator | 2026-04-11 01:41:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:41:31.618238 | orchestrator | 2026-04-11 01:41:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:31.619457 | orchestrator | 2026-04-11 01:41:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:41:31.619489 | orchestrator | 2026-04-11 01:41:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:41:34.656352 | orchestrator | 2026-04-11 01:41:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:34.658788 | orchestrator | 2026-04-11 01:41:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:34.658990 | orchestrator | 2026-04-11 01:41:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:41:37.707322 | orchestrator | 2026-04-11 01:41:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:37.708854 | orchestrator | 2026-04-11 01:41:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:37.709029 | orchestrator | 2026-04-11 01:41:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:41:40.756661 | orchestrator | 2026-04-11 01:41:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:40.758742 | orchestrator | 2026-04-11 01:41:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:40.758798 | orchestrator | 2026-04-11 01:41:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:41:43.804002 | orchestrator | 2026-04-11 01:41:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:43.805560 | orchestrator | 2026-04-11 01:41:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:43.805635 | orchestrator | 2026-04-11 01:41:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:41:46.855108 | orchestrator | 2026-04-11 01:41:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:46.858151 | orchestrator | 2026-04-11 01:41:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:46.858232 | orchestrator | 2026-04-11 01:41:46 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:41:49.907438 | orchestrator | 2026-04-11 01:41:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:49.909481 | orchestrator | 2026-04-11 01:41:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:49.909580 | orchestrator | 2026-04-11 01:41:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:41:52.957097 | orchestrator | 2026-04-11 01:41:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:52.959383 | orchestrator | 2026-04-11 01:41:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:52.959509 | orchestrator | 2026-04-11 01:41:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:41:56.013278 | orchestrator | 2026-04-11 01:41:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:56.015076 | orchestrator | 2026-04-11 01:41:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:56.015158 | orchestrator | 2026-04-11 01:41:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:41:59.065884 | orchestrator | 2026-04-11 01:41:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:41:59.068427 | orchestrator | 2026-04-11 01:41:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:41:59.068536 | orchestrator | 2026-04-11 01:41:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:02.118633 | orchestrator | 2026-04-11 01:42:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:02.121610 | orchestrator | 2026-04-11 01:42:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:02.121663 | orchestrator | 2026-04-11 01:42:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:05.166085 | orchestrator | 2026-04-11 
01:42:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:05.168892 | orchestrator | 2026-04-11 01:42:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:05.168950 | orchestrator | 2026-04-11 01:42:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:08.210392 | orchestrator | 2026-04-11 01:42:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:08.211803 | orchestrator | 2026-04-11 01:42:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:08.211841 | orchestrator | 2026-04-11 01:42:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:11.258247 | orchestrator | 2026-04-11 01:42:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:11.260328 | orchestrator | 2026-04-11 01:42:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:11.260384 | orchestrator | 2026-04-11 01:42:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:14.308601 | orchestrator | 2026-04-11 01:42:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:14.311485 | orchestrator | 2026-04-11 01:42:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:14.311569 | orchestrator | 2026-04-11 01:42:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:17.361532 | orchestrator | 2026-04-11 01:42:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:17.362931 | orchestrator | 2026-04-11 01:42:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:17.363022 | orchestrator | 2026-04-11 01:42:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:20.411727 | orchestrator | 2026-04-11 01:42:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:42:20.414367 | orchestrator | 2026-04-11 01:42:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:20.414620 | orchestrator | 2026-04-11 01:42:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:23.467483 | orchestrator | 2026-04-11 01:42:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:23.469420 | orchestrator | 2026-04-11 01:42:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:23.469494 | orchestrator | 2026-04-11 01:42:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:26.518809 | orchestrator | 2026-04-11 01:42:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:26.520509 | orchestrator | 2026-04-11 01:42:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:26.520563 | orchestrator | 2026-04-11 01:42:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:29.569090 | orchestrator | 2026-04-11 01:42:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:29.570482 | orchestrator | 2026-04-11 01:42:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:29.570557 | orchestrator | 2026-04-11 01:42:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:32.625875 | orchestrator | 2026-04-11 01:42:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:32.627504 | orchestrator | 2026-04-11 01:42:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:32.627547 | orchestrator | 2026-04-11 01:42:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:35.677658 | orchestrator | 2026-04-11 01:42:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:35.679968 | orchestrator | 2026-04-11 01:42:35 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:35.680105 | orchestrator | 2026-04-11 01:42:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:38.727861 | orchestrator | 2026-04-11 01:42:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:38.730541 | orchestrator | 2026-04-11 01:42:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:38.730764 | orchestrator | 2026-04-11 01:42:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:41.775377 | orchestrator | 2026-04-11 01:42:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:41.777578 | orchestrator | 2026-04-11 01:42:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:41.777624 | orchestrator | 2026-04-11 01:42:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:44.824975 | orchestrator | 2026-04-11 01:42:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:44.826414 | orchestrator | 2026-04-11 01:42:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:44.826465 | orchestrator | 2026-04-11 01:42:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:47.871717 | orchestrator | 2026-04-11 01:42:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:47.873728 | orchestrator | 2026-04-11 01:42:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:47.875048 | orchestrator | 2026-04-11 01:42:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:50.919816 | orchestrator | 2026-04-11 01:42:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:50.921757 | orchestrator | 2026-04-11 01:42:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:42:50.921873 | orchestrator | 2026-04-11 01:42:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:53.971983 | orchestrator | 2026-04-11 01:42:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:53.972107 | orchestrator | 2026-04-11 01:42:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:53.972128 | orchestrator | 2026-04-11 01:42:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:42:57.011198 | orchestrator | 2026-04-11 01:42:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:42:57.012766 | orchestrator | 2026-04-11 01:42:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:42:57.012810 | orchestrator | 2026-04-11 01:42:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:00.050811 | orchestrator | 2026-04-11 01:43:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:00.053482 | orchestrator | 2026-04-11 01:43:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:00.053669 | orchestrator | 2026-04-11 01:43:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:03.097577 | orchestrator | 2026-04-11 01:43:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:03.099541 | orchestrator | 2026-04-11 01:43:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:03.099612 | orchestrator | 2026-04-11 01:43:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:06.148338 | orchestrator | 2026-04-11 01:43:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:06.149886 | orchestrator | 2026-04-11 01:43:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:06.149998 | orchestrator | 2026-04-11 01:43:06 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:43:09.196057 | orchestrator | 2026-04-11 01:43:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:09.198293 | orchestrator | 2026-04-11 01:43:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:09.198599 | orchestrator | 2026-04-11 01:43:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:12.244472 | orchestrator | 2026-04-11 01:43:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:12.245992 | orchestrator | 2026-04-11 01:43:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:12.246099 | orchestrator | 2026-04-11 01:43:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:15.291844 | orchestrator | 2026-04-11 01:43:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:15.294497 | orchestrator | 2026-04-11 01:43:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:15.294550 | orchestrator | 2026-04-11 01:43:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:18.337251 | orchestrator | 2026-04-11 01:43:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:18.339561 | orchestrator | 2026-04-11 01:43:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:18.339673 | orchestrator | 2026-04-11 01:43:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:21.387308 | orchestrator | 2026-04-11 01:43:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:21.389607 | orchestrator | 2026-04-11 01:43:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:21.389797 | orchestrator | 2026-04-11 01:43:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:24.433533 | orchestrator | 2026-04-11 
01:43:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:24.435552 | orchestrator | 2026-04-11 01:43:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:24.435603 | orchestrator | 2026-04-11 01:43:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:27.480025 | orchestrator | 2026-04-11 01:43:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:27.483544 | orchestrator | 2026-04-11 01:43:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:27.483644 | orchestrator | 2026-04-11 01:43:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:30.526130 | orchestrator | 2026-04-11 01:43:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:30.527789 | orchestrator | 2026-04-11 01:43:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:30.528033 | orchestrator | 2026-04-11 01:43:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:33.573888 | orchestrator | 2026-04-11 01:43:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:33.576943 | orchestrator | 2026-04-11 01:43:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:33.577043 | orchestrator | 2026-04-11 01:43:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:36.620570 | orchestrator | 2026-04-11 01:43:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:36.621411 | orchestrator | 2026-04-11 01:43:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:36.621434 | orchestrator | 2026-04-11 01:43:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:39.663084 | orchestrator | 2026-04-11 01:43:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:43:39.664487 | orchestrator | 2026-04-11 01:43:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:39.664632 | orchestrator | 2026-04-11 01:43:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:42.708797 | orchestrator | 2026-04-11 01:43:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:42.710228 | orchestrator | 2026-04-11 01:43:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:42.710317 | orchestrator | 2026-04-11 01:43:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:45.754569 | orchestrator | 2026-04-11 01:43:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:45.756538 | orchestrator | 2026-04-11 01:43:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:45.756617 | orchestrator | 2026-04-11 01:43:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:48.800802 | orchestrator | 2026-04-11 01:43:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:48.803034 | orchestrator | 2026-04-11 01:43:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:48.803201 | orchestrator | 2026-04-11 01:43:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:51.851969 | orchestrator | 2026-04-11 01:43:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:51.855236 | orchestrator | 2026-04-11 01:43:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:51.855288 | orchestrator | 2026-04-11 01:43:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:54.900933 | orchestrator | 2026-04-11 01:43:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:54.902554 | orchestrator | 2026-04-11 01:43:54 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:54.902670 | orchestrator | 2026-04-11 01:43:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:43:57.946371 | orchestrator | 2026-04-11 01:43:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:43:57.948605 | orchestrator | 2026-04-11 01:43:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:43:57.948764 | orchestrator | 2026-04-11 01:43:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:00.990241 | orchestrator | 2026-04-11 01:44:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:00.991998 | orchestrator | 2026-04-11 01:44:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:00.992101 | orchestrator | 2026-04-11 01:44:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:04.029795 | orchestrator | 2026-04-11 01:44:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:04.031402 | orchestrator | 2026-04-11 01:44:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:04.031819 | orchestrator | 2026-04-11 01:44:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:07.081541 | orchestrator | 2026-04-11 01:44:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:07.084948 | orchestrator | 2026-04-11 01:44:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:07.085012 | orchestrator | 2026-04-11 01:44:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:10.126946 | orchestrator | 2026-04-11 01:44:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:10.128610 | orchestrator | 2026-04-11 01:44:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:44:10.128668 | orchestrator | 2026-04-11 01:44:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:13.171253 | orchestrator | 2026-04-11 01:44:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:13.172882 | orchestrator | 2026-04-11 01:44:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:13.172933 | orchestrator | 2026-04-11 01:44:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:16.212631 | orchestrator | 2026-04-11 01:44:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:16.214782 | orchestrator | 2026-04-11 01:44:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:16.214877 | orchestrator | 2026-04-11 01:44:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:19.264056 | orchestrator | 2026-04-11 01:44:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:19.266117 | orchestrator | 2026-04-11 01:44:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:19.266186 | orchestrator | 2026-04-11 01:44:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:22.313350 | orchestrator | 2026-04-11 01:44:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:22.315761 | orchestrator | 2026-04-11 01:44:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:22.315901 | orchestrator | 2026-04-11 01:44:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:25.354117 | orchestrator | 2026-04-11 01:44:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:25.355184 | orchestrator | 2026-04-11 01:44:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:25.355243 | orchestrator | 2026-04-11 01:44:25 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:44:28.400352 | orchestrator | 2026-04-11 01:44:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:28.401707 | orchestrator | 2026-04-11 01:44:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:28.401798 | orchestrator | 2026-04-11 01:44:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:31.448292 | orchestrator | 2026-04-11 01:44:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:31.450425 | orchestrator | 2026-04-11 01:44:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:31.450481 | orchestrator | 2026-04-11 01:44:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:34.493994 | orchestrator | 2026-04-11 01:44:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:34.495227 | orchestrator | 2026-04-11 01:44:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:34.495413 | orchestrator | 2026-04-11 01:44:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:37.545373 | orchestrator | 2026-04-11 01:44:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:37.546455 | orchestrator | 2026-04-11 01:44:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:37.546537 | orchestrator | 2026-04-11 01:44:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:40.597623 | orchestrator | 2026-04-11 01:44:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:40.599326 | orchestrator | 2026-04-11 01:44:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:40.599359 | orchestrator | 2026-04-11 01:44:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:43.649672 | orchestrator | 2026-04-11 
01:44:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:43.650891 | orchestrator | 2026-04-11 01:44:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:43.650944 | orchestrator | 2026-04-11 01:44:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:46.692631 | orchestrator | 2026-04-11 01:44:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:46.694141 | orchestrator | 2026-04-11 01:44:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:46.694184 | orchestrator | 2026-04-11 01:44:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:49.740426 | orchestrator | 2026-04-11 01:44:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:49.743402 | orchestrator | 2026-04-11 01:44:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:49.743499 | orchestrator | 2026-04-11 01:44:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:52.790984 | orchestrator | 2026-04-11 01:44:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:52.793786 | orchestrator | 2026-04-11 01:44:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:52.793903 | orchestrator | 2026-04-11 01:44:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:55.839248 | orchestrator | 2026-04-11 01:44:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:44:55.840820 | orchestrator | 2026-04-11 01:44:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:55.840917 | orchestrator | 2026-04-11 01:44:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:44:58.885330 | orchestrator | 2026-04-11 01:44:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:44:58.887984 | orchestrator | 2026-04-11 01:44:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:44:58.888048 | orchestrator | 2026-04-11 01:44:58 | INFO  | Wait 1 second(s) until the next check
[... repeated polling output omitted: tasks e775d4e9-d8d9-4a6e-852d-ec153b94cbfc and 27a6903c-a509-4fb9-bf78-996bf8bed0c6 remained in state STARTED, checked every ~3 seconds from 01:45:01 through 01:52:12 ...]
2026-04-11 01:52:15.904825 | orchestrator | 2026-04-11 01:52:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state
STARTED 2026-04-11 01:52:15.906819 | orchestrator | 2026-04-11 01:52:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:15.907116 | orchestrator | 2026-04-11 01:52:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:18.950497 | orchestrator | 2026-04-11 01:52:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:18.952195 | orchestrator | 2026-04-11 01:52:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:18.952253 | orchestrator | 2026-04-11 01:52:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:21.994577 | orchestrator | 2026-04-11 01:52:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:21.997002 | orchestrator | 2026-04-11 01:52:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:21.997056 | orchestrator | 2026-04-11 01:52:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:25.037894 | orchestrator | 2026-04-11 01:52:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:25.039577 | orchestrator | 2026-04-11 01:52:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:25.039618 | orchestrator | 2026-04-11 01:52:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:28.082156 | orchestrator | 2026-04-11 01:52:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:28.084555 | orchestrator | 2026-04-11 01:52:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:28.084763 | orchestrator | 2026-04-11 01:52:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:31.123233 | orchestrator | 2026-04-11 01:52:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:31.123446 | orchestrator | 2026-04-11 01:52:31 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:31.123464 | orchestrator | 2026-04-11 01:52:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:34.169439 | orchestrator | 2026-04-11 01:52:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:34.171368 | orchestrator | 2026-04-11 01:52:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:34.171523 | orchestrator | 2026-04-11 01:52:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:37.213459 | orchestrator | 2026-04-11 01:52:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:37.215424 | orchestrator | 2026-04-11 01:52:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:37.215482 | orchestrator | 2026-04-11 01:52:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:40.261539 | orchestrator | 2026-04-11 01:52:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:40.263806 | orchestrator | 2026-04-11 01:52:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:40.263992 | orchestrator | 2026-04-11 01:52:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:43.309063 | orchestrator | 2026-04-11 01:52:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:43.309945 | orchestrator | 2026-04-11 01:52:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:43.310095 | orchestrator | 2026-04-11 01:52:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:46.353592 | orchestrator | 2026-04-11 01:52:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:46.354701 | orchestrator | 2026-04-11 01:52:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:52:46.354747 | orchestrator | 2026-04-11 01:52:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:49.397724 | orchestrator | 2026-04-11 01:52:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:49.399746 | orchestrator | 2026-04-11 01:52:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:49.399792 | orchestrator | 2026-04-11 01:52:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:52.446190 | orchestrator | 2026-04-11 01:52:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:52.448190 | orchestrator | 2026-04-11 01:52:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:52.448626 | orchestrator | 2026-04-11 01:52:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:55.492757 | orchestrator | 2026-04-11 01:52:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:55.494738 | orchestrator | 2026-04-11 01:52:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:55.494798 | orchestrator | 2026-04-11 01:52:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:52:58.541055 | orchestrator | 2026-04-11 01:52:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:52:58.542366 | orchestrator | 2026-04-11 01:52:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:52:58.542490 | orchestrator | 2026-04-11 01:52:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:01.584796 | orchestrator | 2026-04-11 01:53:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:01.587326 | orchestrator | 2026-04-11 01:53:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:01.587651 | orchestrator | 2026-04-11 01:53:01 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:53:04.634728 | orchestrator | 2026-04-11 01:53:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:04.636747 | orchestrator | 2026-04-11 01:53:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:04.636899 | orchestrator | 2026-04-11 01:53:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:07.678878 | orchestrator | 2026-04-11 01:53:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:07.680008 | orchestrator | 2026-04-11 01:53:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:07.680085 | orchestrator | 2026-04-11 01:53:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:10.724797 | orchestrator | 2026-04-11 01:53:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:10.727348 | orchestrator | 2026-04-11 01:53:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:10.727433 | orchestrator | 2026-04-11 01:53:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:13.770773 | orchestrator | 2026-04-11 01:53:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:13.772458 | orchestrator | 2026-04-11 01:53:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:13.772600 | orchestrator | 2026-04-11 01:53:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:16.818821 | orchestrator | 2026-04-11 01:53:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:16.821325 | orchestrator | 2026-04-11 01:53:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:16.821626 | orchestrator | 2026-04-11 01:53:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:19.862870 | orchestrator | 2026-04-11 
01:53:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:19.864682 | orchestrator | 2026-04-11 01:53:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:19.864759 | orchestrator | 2026-04-11 01:53:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:22.908436 | orchestrator | 2026-04-11 01:53:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:22.910083 | orchestrator | 2026-04-11 01:53:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:22.910133 | orchestrator | 2026-04-11 01:53:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:25.957664 | orchestrator | 2026-04-11 01:53:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:25.957970 | orchestrator | 2026-04-11 01:53:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:25.958096 | orchestrator | 2026-04-11 01:53:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:29.003687 | orchestrator | 2026-04-11 01:53:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:29.005380 | orchestrator | 2026-04-11 01:53:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:29.005438 | orchestrator | 2026-04-11 01:53:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:32.049784 | orchestrator | 2026-04-11 01:53:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:32.052428 | orchestrator | 2026-04-11 01:53:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:32.052589 | orchestrator | 2026-04-11 01:53:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:35.092093 | orchestrator | 2026-04-11 01:53:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:53:35.094432 | orchestrator | 2026-04-11 01:53:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:35.094494 | orchestrator | 2026-04-11 01:53:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:38.137128 | orchestrator | 2026-04-11 01:53:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:38.138948 | orchestrator | 2026-04-11 01:53:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:38.139000 | orchestrator | 2026-04-11 01:53:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:41.185343 | orchestrator | 2026-04-11 01:53:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:41.186690 | orchestrator | 2026-04-11 01:53:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:41.186853 | orchestrator | 2026-04-11 01:53:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:44.229805 | orchestrator | 2026-04-11 01:53:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:44.231131 | orchestrator | 2026-04-11 01:53:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:44.231192 | orchestrator | 2026-04-11 01:53:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:47.268677 | orchestrator | 2026-04-11 01:53:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:47.270812 | orchestrator | 2026-04-11 01:53:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:47.270886 | orchestrator | 2026-04-11 01:53:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:50.314673 | orchestrator | 2026-04-11 01:53:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:50.317010 | orchestrator | 2026-04-11 01:53:50 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:50.317608 | orchestrator | 2026-04-11 01:53:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:53.359653 | orchestrator | 2026-04-11 01:53:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:53.362444 | orchestrator | 2026-04-11 01:53:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:53.362544 | orchestrator | 2026-04-11 01:53:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:56.411798 | orchestrator | 2026-04-11 01:53:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:56.412936 | orchestrator | 2026-04-11 01:53:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:56.412980 | orchestrator | 2026-04-11 01:53:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:53:59.458821 | orchestrator | 2026-04-11 01:53:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:53:59.460439 | orchestrator | 2026-04-11 01:53:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:53:59.460685 | orchestrator | 2026-04-11 01:53:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:02.501888 | orchestrator | 2026-04-11 01:54:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:02.503921 | orchestrator | 2026-04-11 01:54:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:02.503991 | orchestrator | 2026-04-11 01:54:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:05.545096 | orchestrator | 2026-04-11 01:54:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:05.547940 | orchestrator | 2026-04-11 01:54:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:54:05.548007 | orchestrator | 2026-04-11 01:54:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:08.589893 | orchestrator | 2026-04-11 01:54:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:08.591391 | orchestrator | 2026-04-11 01:54:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:08.591407 | orchestrator | 2026-04-11 01:54:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:11.633557 | orchestrator | 2026-04-11 01:54:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:11.635462 | orchestrator | 2026-04-11 01:54:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:11.635518 | orchestrator | 2026-04-11 01:54:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:14.678696 | orchestrator | 2026-04-11 01:54:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:14.680783 | orchestrator | 2026-04-11 01:54:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:14.680851 | orchestrator | 2026-04-11 01:54:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:17.721884 | orchestrator | 2026-04-11 01:54:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:17.722760 | orchestrator | 2026-04-11 01:54:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:17.722853 | orchestrator | 2026-04-11 01:54:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:20.767178 | orchestrator | 2026-04-11 01:54:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:20.768935 | orchestrator | 2026-04-11 01:54:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:20.769016 | orchestrator | 2026-04-11 01:54:20 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 01:54:23.811518 | orchestrator | 2026-04-11 01:54:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:23.813935 | orchestrator | 2026-04-11 01:54:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:23.814178 | orchestrator | 2026-04-11 01:54:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:26.856728 | orchestrator | 2026-04-11 01:54:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:26.858083 | orchestrator | 2026-04-11 01:54:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:26.858142 | orchestrator | 2026-04-11 01:54:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:29.902205 | orchestrator | 2026-04-11 01:54:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:29.904044 | orchestrator | 2026-04-11 01:54:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:29.904091 | orchestrator | 2026-04-11 01:54:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:32.946695 | orchestrator | 2026-04-11 01:54:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:32.948400 | orchestrator | 2026-04-11 01:54:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:32.948505 | orchestrator | 2026-04-11 01:54:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:35.987926 | orchestrator | 2026-04-11 01:54:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:35.989429 | orchestrator | 2026-04-11 01:54:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:35.989465 | orchestrator | 2026-04-11 01:54:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:39.034472 | orchestrator | 2026-04-11 
01:54:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:39.036349 | orchestrator | 2026-04-11 01:54:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:39.036408 | orchestrator | 2026-04-11 01:54:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:42.073774 | orchestrator | 2026-04-11 01:54:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:42.074914 | orchestrator | 2026-04-11 01:54:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:42.075068 | orchestrator | 2026-04-11 01:54:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:45.116826 | orchestrator | 2026-04-11 01:54:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:45.119334 | orchestrator | 2026-04-11 01:54:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:45.119497 | orchestrator | 2026-04-11 01:54:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:48.161905 | orchestrator | 2026-04-11 01:54:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:48.163368 | orchestrator | 2026-04-11 01:54:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:48.163592 | orchestrator | 2026-04-11 01:54:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:51.208302 | orchestrator | 2026-04-11 01:54:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:51.210607 | orchestrator | 2026-04-11 01:54:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:51.210758 | orchestrator | 2026-04-11 01:54:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:54.253611 | orchestrator | 2026-04-11 01:54:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 01:54:54.255385 | orchestrator | 2026-04-11 01:54:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:54.255713 | orchestrator | 2026-04-11 01:54:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:54:57.298892 | orchestrator | 2026-04-11 01:54:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:54:57.301027 | orchestrator | 2026-04-11 01:54:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:54:57.301692 | orchestrator | 2026-04-11 01:54:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:00.344206 | orchestrator | 2026-04-11 01:55:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:00.345501 | orchestrator | 2026-04-11 01:55:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:55:00.345536 | orchestrator | 2026-04-11 01:55:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:03.383176 | orchestrator | 2026-04-11 01:55:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:03.385122 | orchestrator | 2026-04-11 01:55:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:55:03.385271 | orchestrator | 2026-04-11 01:55:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:06.427232 | orchestrator | 2026-04-11 01:55:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:06.429259 | orchestrator | 2026-04-11 01:55:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:55:06.429839 | orchestrator | 2026-04-11 01:55:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:09.470285 | orchestrator | 2026-04-11 01:55:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:09.471727 | orchestrator | 2026-04-11 01:55:09 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:55:09.471778 | orchestrator | 2026-04-11 01:55:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:12.516219 | orchestrator | 2026-04-11 01:55:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:12.517574 | orchestrator | 2026-04-11 01:55:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:55:12.517643 | orchestrator | 2026-04-11 01:55:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:15.560163 | orchestrator | 2026-04-11 01:55:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:15.561171 | orchestrator | 2026-04-11 01:55:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:55:15.561210 | orchestrator | 2026-04-11 01:55:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:18.600502 | orchestrator | 2026-04-11 01:55:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:18.601699 | orchestrator | 2026-04-11 01:55:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:55:18.601746 | orchestrator | 2026-04-11 01:55:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:21.646739 | orchestrator | 2026-04-11 01:55:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:21.650183 | orchestrator | 2026-04-11 01:55:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:55:21.650253 | orchestrator | 2026-04-11 01:55:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:24.694164 | orchestrator | 2026-04-11 01:55:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:24.695791 | orchestrator | 2026-04-11 01:55:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
01:55:24.695857 | orchestrator | 2026-04-11 01:55:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:27.743762 | orchestrator | 2026-04-11 01:55:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:27.745979 | orchestrator | 2026-04-11 01:55:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:55:27.746117 | orchestrator | 2026-04-11 01:55:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:30.789012 | orchestrator | 2026-04-11 01:55:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:30.791812 | orchestrator | 2026-04-11 01:55:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:55:30.791968 | orchestrator | 2026-04-11 01:55:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:33.843922 | orchestrator | 2026-04-11 01:55:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:33.845659 | orchestrator | 2026-04-11 01:55:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:55:33.845721 | orchestrator | 2026-04-11 01:55:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:36.888540 | orchestrator | 2026-04-11 01:55:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:36.890187 | orchestrator | 2026-04-11 01:55:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:55:36.890266 | orchestrator | 2026-04-11 01:55:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 01:55:39.932033 | orchestrator | 2026-04-11 01:55:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 01:55:39.933126 | orchestrator | 2026-04-11 01:55:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 01:55:39.933182 | orchestrator | 2026-04-11 01:55:39 | INFO  | Wait 1 second(s) 
until the next check
2026-04-11 01:55:42.974673 | orchestrator | 2026-04-11 01:55:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 01:55:42.977164 | orchestrator | 2026-04-11 01:55:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 01:55:42.977259 | orchestrator | 2026-04-11 01:55:42 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated roughly every 3 seconds from 01:55:46 through 02:00:53; both tasks remained in state STARTED throughout ...]
2026-04-11 02:00:56.956031 | orchestrator | 2026-04-11 02:00:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 02:00:56.958648 | orchestrator | 2026-04-11 02:00:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 02:00:56.958725 | orchestrator | 2026-04-11 02:00:56 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:01:00.006239 | orchestrator | 2026-04-11 02:01:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:00.008100 | orchestrator | 2026-04-11 02:01:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:00.008143 | orchestrator | 2026-04-11 02:01:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:03.061478 | orchestrator | 2026-04-11 02:01:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:03.063163 | orchestrator | 2026-04-11 02:01:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:03.063246 | orchestrator | 2026-04-11 02:01:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:06.115719 | orchestrator | 2026-04-11 02:01:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:06.117412 | orchestrator | 2026-04-11 02:01:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:06.117444 | orchestrator | 2026-04-11 02:01:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:09.165626 | orchestrator | 2026-04-11 02:01:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:09.167494 | orchestrator | 2026-04-11 02:01:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:09.167551 | orchestrator | 2026-04-11 02:01:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:12.214617 | orchestrator | 2026-04-11 02:01:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:12.216425 | orchestrator | 2026-04-11 02:01:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:12.216573 | orchestrator | 2026-04-11 02:01:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:15.258774 | orchestrator | 2026-04-11 
02:01:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:15.260269 | orchestrator | 2026-04-11 02:01:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:15.260305 | orchestrator | 2026-04-11 02:01:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:18.305904 | orchestrator | 2026-04-11 02:01:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:18.307509 | orchestrator | 2026-04-11 02:01:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:18.307618 | orchestrator | 2026-04-11 02:01:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:21.348344 | orchestrator | 2026-04-11 02:01:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:21.351295 | orchestrator | 2026-04-11 02:01:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:21.351484 | orchestrator | 2026-04-11 02:01:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:24.396178 | orchestrator | 2026-04-11 02:01:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:24.397484 | orchestrator | 2026-04-11 02:01:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:24.397596 | orchestrator | 2026-04-11 02:01:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:27.436512 | orchestrator | 2026-04-11 02:01:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:27.438252 | orchestrator | 2026-04-11 02:01:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:27.438339 | orchestrator | 2026-04-11 02:01:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:30.487758 | orchestrator | 2026-04-11 02:01:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:01:30.489461 | orchestrator | 2026-04-11 02:01:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:30.489566 | orchestrator | 2026-04-11 02:01:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:33.539440 | orchestrator | 2026-04-11 02:01:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:33.540790 | orchestrator | 2026-04-11 02:01:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:33.540825 | orchestrator | 2026-04-11 02:01:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:36.592760 | orchestrator | 2026-04-11 02:01:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:36.594240 | orchestrator | 2026-04-11 02:01:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:36.594304 | orchestrator | 2026-04-11 02:01:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:39.640198 | orchestrator | 2026-04-11 02:01:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:39.642433 | orchestrator | 2026-04-11 02:01:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:39.642524 | orchestrator | 2026-04-11 02:01:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:42.693209 | orchestrator | 2026-04-11 02:01:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:42.694969 | orchestrator | 2026-04-11 02:01:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:42.695076 | orchestrator | 2026-04-11 02:01:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:45.747046 | orchestrator | 2026-04-11 02:01:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:45.748333 | orchestrator | 2026-04-11 02:01:45 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:45.748398 | orchestrator | 2026-04-11 02:01:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:48.794969 | orchestrator | 2026-04-11 02:01:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:48.796819 | orchestrator | 2026-04-11 02:01:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:48.796867 | orchestrator | 2026-04-11 02:01:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:51.846208 | orchestrator | 2026-04-11 02:01:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:51.848118 | orchestrator | 2026-04-11 02:01:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:51.848164 | orchestrator | 2026-04-11 02:01:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:54.898462 | orchestrator | 2026-04-11 02:01:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:54.898553 | orchestrator | 2026-04-11 02:01:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:54.898560 | orchestrator | 2026-04-11 02:01:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:01:57.949280 | orchestrator | 2026-04-11 02:01:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:01:57.951780 | orchestrator | 2026-04-11 02:01:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:01:57.951837 | orchestrator | 2026-04-11 02:01:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:00.996625 | orchestrator | 2026-04-11 02:02:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:00.998441 | orchestrator | 2026-04-11 02:02:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:02:00.998846 | orchestrator | 2026-04-11 02:02:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:04.044070 | orchestrator | 2026-04-11 02:02:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:04.045788 | orchestrator | 2026-04-11 02:02:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:04.045852 | orchestrator | 2026-04-11 02:02:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:07.093698 | orchestrator | 2026-04-11 02:02:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:07.096137 | orchestrator | 2026-04-11 02:02:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:07.096218 | orchestrator | 2026-04-11 02:02:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:10.138857 | orchestrator | 2026-04-11 02:02:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:10.140001 | orchestrator | 2026-04-11 02:02:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:10.140044 | orchestrator | 2026-04-11 02:02:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:13.189078 | orchestrator | 2026-04-11 02:02:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:13.191101 | orchestrator | 2026-04-11 02:02:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:13.191246 | orchestrator | 2026-04-11 02:02:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:16.241303 | orchestrator | 2026-04-11 02:02:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:16.242982 | orchestrator | 2026-04-11 02:02:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:16.243044 | orchestrator | 2026-04-11 02:02:16 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:02:19.284096 | orchestrator | 2026-04-11 02:02:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:19.285339 | orchestrator | 2026-04-11 02:02:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:19.285465 | orchestrator | 2026-04-11 02:02:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:22.332944 | orchestrator | 2026-04-11 02:02:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:22.334333 | orchestrator | 2026-04-11 02:02:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:22.334398 | orchestrator | 2026-04-11 02:02:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:25.379897 | orchestrator | 2026-04-11 02:02:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:25.381805 | orchestrator | 2026-04-11 02:02:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:25.381896 | orchestrator | 2026-04-11 02:02:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:28.420732 | orchestrator | 2026-04-11 02:02:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:28.423201 | orchestrator | 2026-04-11 02:02:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:28.423444 | orchestrator | 2026-04-11 02:02:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:31.462139 | orchestrator | 2026-04-11 02:02:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:31.463525 | orchestrator | 2026-04-11 02:02:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:31.463600 | orchestrator | 2026-04-11 02:02:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:34.512424 | orchestrator | 2026-04-11 
02:02:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:34.514347 | orchestrator | 2026-04-11 02:02:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:34.514408 | orchestrator | 2026-04-11 02:02:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:37.565203 | orchestrator | 2026-04-11 02:02:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:37.566446 | orchestrator | 2026-04-11 02:02:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:37.566583 | orchestrator | 2026-04-11 02:02:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:40.612525 | orchestrator | 2026-04-11 02:02:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:40.614511 | orchestrator | 2026-04-11 02:02:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:40.614567 | orchestrator | 2026-04-11 02:02:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:43.659548 | orchestrator | 2026-04-11 02:02:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:43.661895 | orchestrator | 2026-04-11 02:02:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:43.661954 | orchestrator | 2026-04-11 02:02:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:46.704977 | orchestrator | 2026-04-11 02:02:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:46.706396 | orchestrator | 2026-04-11 02:02:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:46.706447 | orchestrator | 2026-04-11 02:02:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:49.748505 | orchestrator | 2026-04-11 02:02:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:02:49.749259 | orchestrator | 2026-04-11 02:02:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:49.749289 | orchestrator | 2026-04-11 02:02:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:52.793895 | orchestrator | 2026-04-11 02:02:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:52.795025 | orchestrator | 2026-04-11 02:02:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:52.795094 | orchestrator | 2026-04-11 02:02:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:55.842125 | orchestrator | 2026-04-11 02:02:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:55.842963 | orchestrator | 2026-04-11 02:02:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:55.843138 | orchestrator | 2026-04-11 02:02:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:02:58.885739 | orchestrator | 2026-04-11 02:02:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:02:58.886122 | orchestrator | 2026-04-11 02:02:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:02:58.886162 | orchestrator | 2026-04-11 02:02:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:01.929660 | orchestrator | 2026-04-11 02:03:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:01.931115 | orchestrator | 2026-04-11 02:03:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:01.931189 | orchestrator | 2026-04-11 02:03:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:04.973279 | orchestrator | 2026-04-11 02:03:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:04.975379 | orchestrator | 2026-04-11 02:03:04 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:04.975587 | orchestrator | 2026-04-11 02:03:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:08.022242 | orchestrator | 2026-04-11 02:03:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:08.024701 | orchestrator | 2026-04-11 02:03:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:08.024781 | orchestrator | 2026-04-11 02:03:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:11.075269 | orchestrator | 2026-04-11 02:03:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:11.077169 | orchestrator | 2026-04-11 02:03:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:11.077211 | orchestrator | 2026-04-11 02:03:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:14.124890 | orchestrator | 2026-04-11 02:03:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:14.127873 | orchestrator | 2026-04-11 02:03:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:14.127978 | orchestrator | 2026-04-11 02:03:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:17.177649 | orchestrator | 2026-04-11 02:03:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:17.181101 | orchestrator | 2026-04-11 02:03:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:17.181152 | orchestrator | 2026-04-11 02:03:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:20.228237 | orchestrator | 2026-04-11 02:03:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:20.229326 | orchestrator | 2026-04-11 02:03:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:03:20.229433 | orchestrator | 2026-04-11 02:03:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:23.276818 | orchestrator | 2026-04-11 02:03:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:23.278498 | orchestrator | 2026-04-11 02:03:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:23.278538 | orchestrator | 2026-04-11 02:03:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:26.322838 | orchestrator | 2026-04-11 02:03:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:26.325210 | orchestrator | 2026-04-11 02:03:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:26.325275 | orchestrator | 2026-04-11 02:03:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:29.379848 | orchestrator | 2026-04-11 02:03:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:29.382006 | orchestrator | 2026-04-11 02:03:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:29.382276 | orchestrator | 2026-04-11 02:03:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:32.432828 | orchestrator | 2026-04-11 02:03:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:32.434965 | orchestrator | 2026-04-11 02:03:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:32.435016 | orchestrator | 2026-04-11 02:03:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:35.478455 | orchestrator | 2026-04-11 02:03:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:35.480566 | orchestrator | 2026-04-11 02:03:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:35.480897 | orchestrator | 2026-04-11 02:03:35 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:03:38.522313 | orchestrator | 2026-04-11 02:03:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:38.523663 | orchestrator | 2026-04-11 02:03:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:38.523748 | orchestrator | 2026-04-11 02:03:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:41.572362 | orchestrator | 2026-04-11 02:03:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:41.574511 | orchestrator | 2026-04-11 02:03:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:41.574610 | orchestrator | 2026-04-11 02:03:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:44.618718 | orchestrator | 2026-04-11 02:03:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:44.620428 | orchestrator | 2026-04-11 02:03:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:44.620478 | orchestrator | 2026-04-11 02:03:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:47.664561 | orchestrator | 2026-04-11 02:03:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:47.666529 | orchestrator | 2026-04-11 02:03:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:47.666670 | orchestrator | 2026-04-11 02:03:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:50.710918 | orchestrator | 2026-04-11 02:03:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:50.713251 | orchestrator | 2026-04-11 02:03:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:50.713350 | orchestrator | 2026-04-11 02:03:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:53.760190 | orchestrator | 2026-04-11 
02:03:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:53.761882 | orchestrator | 2026-04-11 02:03:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:53.761956 | orchestrator | 2026-04-11 02:03:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:56.807016 | orchestrator | 2026-04-11 02:03:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:56.808788 | orchestrator | 2026-04-11 02:03:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:56.808871 | orchestrator | 2026-04-11 02:03:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:03:59.849444 | orchestrator | 2026-04-11 02:03:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:03:59.849644 | orchestrator | 2026-04-11 02:03:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:03:59.849665 | orchestrator | 2026-04-11 02:03:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:04:02.894055 | orchestrator | 2026-04-11 02:04:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:04:02.896271 | orchestrator | 2026-04-11 02:04:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:04:02.896407 | orchestrator | 2026-04-11 02:04:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:04:05.943527 | orchestrator | 2026-04-11 02:04:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:04:05.945598 | orchestrator | 2026-04-11 02:04:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:04:05.945674 | orchestrator | 2026-04-11 02:04:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:04:08.993072 | orchestrator | 2026-04-11 02:04:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:04:08.994453 | orchestrator | 2026-04-11 02:04:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:04:08.994680 | orchestrator | 2026-04-11 02:04:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:04:12.040971 | orchestrator | 2026-04-11 02:04:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:04:12.043542 | orchestrator | 2026-04-11 02:04:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:04:12.043618 | orchestrator | 2026-04-11 02:04:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:04:15.094312 | orchestrator | 2026-04-11 02:04:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:04:15.095905 | orchestrator | 2026-04-11 02:04:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:04:15.096051 | orchestrator | 2026-04-11 02:04:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:04:18.144348 | orchestrator | 2026-04-11 02:04:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:04:18.145329 | orchestrator | 2026-04-11 02:04:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:04:18.145379 | orchestrator | 2026-04-11 02:04:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:04:21.192216 | orchestrator | 2026-04-11 02:04:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:04:21.194474 | orchestrator | 2026-04-11 02:04:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:04:21.194543 | orchestrator | 2026-04-11 02:04:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:04:24.243387 | orchestrator | 2026-04-11 02:04:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:04:24.245811 | orchestrator | 2026-04-11 02:04:24 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:04:24.245965 | orchestrator | 2026-04-11 02:04:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:04:27.293183 | orchestrator | 2026-04-11 02:04:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:04:27.295643 | orchestrator | 2026-04-11 02:04:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:04:27.295771 | orchestrator | 2026-04-11 02:04:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:04:30.343671 | orchestrator | 2026-04-11 02:04:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:04:30.344295 | orchestrator | 2026-04-11 02:04:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:04:30.344485 | orchestrator | 2026-04-11 02:04:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:04:33.386433 | orchestrator | 2026-04-11 02:04:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:04:33.388896 | orchestrator | 2026-04-11 02:04:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:04:33.389370 | orchestrator | 2026-04-11 02:04:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:04:36.441764 | orchestrator | 2026-04-11 02:04:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:04:36.443751 | orchestrator | 2026-04-11 02:04:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:04:36.444533 | orchestrator | 2026-04-11 02:04:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:04:39.497447 | orchestrator | 2026-04-11 02:04:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:04:39.499294 | orchestrator | 2026-04-11 02:04:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
2026-04-11 02:04:39.499352 | orchestrator | 2026-04-11 02:04:39 | INFO  | Wait 1 second(s) until the next check
2026-04-11 02:04:42.549941 | orchestrator | 2026-04-11 02:04:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 02:04:42.551355 | orchestrator | 2026-04-11 02:04:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 02:04:42.551398 | orchestrator | 2026-04-11 02:04:42 | INFO  | Wait 1 second(s) until the next check
[identical polling entries for tasks e775d4e9-d8d9-4a6e-852d-ec153b94cbfc and 27a6903c-a509-4fb9-bf78-996bf8bed0c6 (both in state STARTED, checked every ~3 seconds) repeated from 02:04:45 through 02:10:09 elided]
2026-04-11 02:10:12.070255 | orchestrator | 2026-04-11 02:10:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 02:10:12.071010 | orchestrator | 2026-04-11 02:10:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 02:10:12.071103 | orchestrator | 2026-04-11 02:10:12 | INFO  | Wait 1 second(s)
until the next check 2026-04-11 02:10:15.115449 | orchestrator | 2026-04-11 02:10:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:15.118786 | orchestrator | 2026-04-11 02:10:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:15.118869 | orchestrator | 2026-04-11 02:10:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:18.168092 | orchestrator | 2026-04-11 02:10:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:18.169652 | orchestrator | 2026-04-11 02:10:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:18.169692 | orchestrator | 2026-04-11 02:10:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:21.214698 | orchestrator | 2026-04-11 02:10:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:21.215612 | orchestrator | 2026-04-11 02:10:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:21.215652 | orchestrator | 2026-04-11 02:10:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:24.260515 | orchestrator | 2026-04-11 02:10:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:24.263251 | orchestrator | 2026-04-11 02:10:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:24.263474 | orchestrator | 2026-04-11 02:10:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:27.312741 | orchestrator | 2026-04-11 02:10:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:27.314369 | orchestrator | 2026-04-11 02:10:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:27.314394 | orchestrator | 2026-04-11 02:10:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:30.369437 | orchestrator | 2026-04-11 
02:10:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:30.370857 | orchestrator | 2026-04-11 02:10:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:30.370919 | orchestrator | 2026-04-11 02:10:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:33.418723 | orchestrator | 2026-04-11 02:10:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:33.420430 | orchestrator | 2026-04-11 02:10:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:33.420491 | orchestrator | 2026-04-11 02:10:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:36.468828 | orchestrator | 2026-04-11 02:10:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:36.471856 | orchestrator | 2026-04-11 02:10:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:36.471916 | orchestrator | 2026-04-11 02:10:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:39.514566 | orchestrator | 2026-04-11 02:10:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:39.516328 | orchestrator | 2026-04-11 02:10:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:39.516379 | orchestrator | 2026-04-11 02:10:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:42.560013 | orchestrator | 2026-04-11 02:10:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:42.561479 | orchestrator | 2026-04-11 02:10:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:42.561526 | orchestrator | 2026-04-11 02:10:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:45.613965 | orchestrator | 2026-04-11 02:10:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:10:45.615959 | orchestrator | 2026-04-11 02:10:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:45.615987 | orchestrator | 2026-04-11 02:10:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:48.667756 | orchestrator | 2026-04-11 02:10:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:48.669539 | orchestrator | 2026-04-11 02:10:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:48.669668 | orchestrator | 2026-04-11 02:10:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:51.715648 | orchestrator | 2026-04-11 02:10:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:51.716364 | orchestrator | 2026-04-11 02:10:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:51.716389 | orchestrator | 2026-04-11 02:10:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:54.754806 | orchestrator | 2026-04-11 02:10:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:54.756281 | orchestrator | 2026-04-11 02:10:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:54.756320 | orchestrator | 2026-04-11 02:10:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:10:57.805260 | orchestrator | 2026-04-11 02:10:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:10:57.807601 | orchestrator | 2026-04-11 02:10:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:10:57.807660 | orchestrator | 2026-04-11 02:10:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:00.857284 | orchestrator | 2026-04-11 02:11:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:00.858706 | orchestrator | 2026-04-11 02:11:00 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:00.858740 | orchestrator | 2026-04-11 02:11:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:03.902663 | orchestrator | 2026-04-11 02:11:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:03.904184 | orchestrator | 2026-04-11 02:11:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:03.904236 | orchestrator | 2026-04-11 02:11:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:06.950507 | orchestrator | 2026-04-11 02:11:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:06.951472 | orchestrator | 2026-04-11 02:11:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:06.951547 | orchestrator | 2026-04-11 02:11:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:09.994570 | orchestrator | 2026-04-11 02:11:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:09.996010 | orchestrator | 2026-04-11 02:11:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:09.996723 | orchestrator | 2026-04-11 02:11:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:13.052183 | orchestrator | 2026-04-11 02:11:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:13.052860 | orchestrator | 2026-04-11 02:11:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:13.052906 | orchestrator | 2026-04-11 02:11:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:16.100154 | orchestrator | 2026-04-11 02:11:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:16.101693 | orchestrator | 2026-04-11 02:11:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:11:16.101749 | orchestrator | 2026-04-11 02:11:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:19.152698 | orchestrator | 2026-04-11 02:11:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:19.154512 | orchestrator | 2026-04-11 02:11:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:19.154545 | orchestrator | 2026-04-11 02:11:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:22.199938 | orchestrator | 2026-04-11 02:11:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:22.202229 | orchestrator | 2026-04-11 02:11:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:22.202308 | orchestrator | 2026-04-11 02:11:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:25.249586 | orchestrator | 2026-04-11 02:11:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:25.251516 | orchestrator | 2026-04-11 02:11:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:25.251615 | orchestrator | 2026-04-11 02:11:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:28.301554 | orchestrator | 2026-04-11 02:11:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:28.303738 | orchestrator | 2026-04-11 02:11:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:28.303844 | orchestrator | 2026-04-11 02:11:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:31.345838 | orchestrator | 2026-04-11 02:11:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:31.346778 | orchestrator | 2026-04-11 02:11:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:31.346816 | orchestrator | 2026-04-11 02:11:31 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:11:34.390809 | orchestrator | 2026-04-11 02:11:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:34.392489 | orchestrator | 2026-04-11 02:11:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:34.392730 | orchestrator | 2026-04-11 02:11:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:37.441383 | orchestrator | 2026-04-11 02:11:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:37.444656 | orchestrator | 2026-04-11 02:11:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:37.444743 | orchestrator | 2026-04-11 02:11:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:40.486289 | orchestrator | 2026-04-11 02:11:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:40.487780 | orchestrator | 2026-04-11 02:11:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:40.487847 | orchestrator | 2026-04-11 02:11:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:43.534456 | orchestrator | 2026-04-11 02:11:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:43.538836 | orchestrator | 2026-04-11 02:11:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:43.539087 | orchestrator | 2026-04-11 02:11:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:46.588247 | orchestrator | 2026-04-11 02:11:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:46.589663 | orchestrator | 2026-04-11 02:11:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:46.589825 | orchestrator | 2026-04-11 02:11:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:49.640597 | orchestrator | 2026-04-11 
02:11:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:49.642122 | orchestrator | 2026-04-11 02:11:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:49.642184 | orchestrator | 2026-04-11 02:11:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:52.688661 | orchestrator | 2026-04-11 02:11:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:52.690568 | orchestrator | 2026-04-11 02:11:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:52.690593 | orchestrator | 2026-04-11 02:11:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:55.730251 | orchestrator | 2026-04-11 02:11:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:55.732607 | orchestrator | 2026-04-11 02:11:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:55.732954 | orchestrator | 2026-04-11 02:11:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:11:58.775812 | orchestrator | 2026-04-11 02:11:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:11:58.776969 | orchestrator | 2026-04-11 02:11:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:11:58.777046 | orchestrator | 2026-04-11 02:11:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:01.818584 | orchestrator | 2026-04-11 02:12:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:01.819816 | orchestrator | 2026-04-11 02:12:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:01.819960 | orchestrator | 2026-04-11 02:12:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:04.863206 | orchestrator | 2026-04-11 02:12:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:12:04.865440 | orchestrator | 2026-04-11 02:12:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:04.865494 | orchestrator | 2026-04-11 02:12:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:07.912003 | orchestrator | 2026-04-11 02:12:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:07.913089 | orchestrator | 2026-04-11 02:12:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:07.913150 | orchestrator | 2026-04-11 02:12:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:10.956556 | orchestrator | 2026-04-11 02:12:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:10.957599 | orchestrator | 2026-04-11 02:12:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:10.957643 | orchestrator | 2026-04-11 02:12:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:14.005310 | orchestrator | 2026-04-11 02:12:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:14.007495 | orchestrator | 2026-04-11 02:12:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:14.007576 | orchestrator | 2026-04-11 02:12:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:17.053148 | orchestrator | 2026-04-11 02:12:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:17.055068 | orchestrator | 2026-04-11 02:12:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:17.055161 | orchestrator | 2026-04-11 02:12:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:20.102002 | orchestrator | 2026-04-11 02:12:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:20.103590 | orchestrator | 2026-04-11 02:12:20 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:20.103984 | orchestrator | 2026-04-11 02:12:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:23.149537 | orchestrator | 2026-04-11 02:12:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:23.151459 | orchestrator | 2026-04-11 02:12:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:23.151524 | orchestrator | 2026-04-11 02:12:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:26.196986 | orchestrator | 2026-04-11 02:12:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:26.198829 | orchestrator | 2026-04-11 02:12:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:26.198912 | orchestrator | 2026-04-11 02:12:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:29.250230 | orchestrator | 2026-04-11 02:12:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:29.251656 | orchestrator | 2026-04-11 02:12:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:29.251739 | orchestrator | 2026-04-11 02:12:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:32.304280 | orchestrator | 2026-04-11 02:12:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:32.307636 | orchestrator | 2026-04-11 02:12:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:32.307708 | orchestrator | 2026-04-11 02:12:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:35.361677 | orchestrator | 2026-04-11 02:12:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:35.363489 | orchestrator | 2026-04-11 02:12:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:12:35.363525 | orchestrator | 2026-04-11 02:12:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:38.417526 | orchestrator | 2026-04-11 02:12:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:38.420190 | orchestrator | 2026-04-11 02:12:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:38.420285 | orchestrator | 2026-04-11 02:12:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:41.461255 | orchestrator | 2026-04-11 02:12:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:41.465052 | orchestrator | 2026-04-11 02:12:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:41.465174 | orchestrator | 2026-04-11 02:12:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:44.512888 | orchestrator | 2026-04-11 02:12:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:44.514318 | orchestrator | 2026-04-11 02:12:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:44.514378 | orchestrator | 2026-04-11 02:12:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:47.563174 | orchestrator | 2026-04-11 02:12:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:47.565607 | orchestrator | 2026-04-11 02:12:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:47.565661 | orchestrator | 2026-04-11 02:12:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:50.613950 | orchestrator | 2026-04-11 02:12:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:50.616117 | orchestrator | 2026-04-11 02:12:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:50.616284 | orchestrator | 2026-04-11 02:12:50 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:12:53.664966 | orchestrator | 2026-04-11 02:12:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:53.665713 | orchestrator | 2026-04-11 02:12:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:53.665797 | orchestrator | 2026-04-11 02:12:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:56.702624 | orchestrator | 2026-04-11 02:12:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:56.703490 | orchestrator | 2026-04-11 02:12:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:56.703543 | orchestrator | 2026-04-11 02:12:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:12:59.754932 | orchestrator | 2026-04-11 02:12:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:12:59.756933 | orchestrator | 2026-04-11 02:12:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:12:59.757001 | orchestrator | 2026-04-11 02:12:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:02.807302 | orchestrator | 2026-04-11 02:13:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:02.808449 | orchestrator | 2026-04-11 02:13:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:02.808505 | orchestrator | 2026-04-11 02:13:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:05.858599 | orchestrator | 2026-04-11 02:13:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:05.860004 | orchestrator | 2026-04-11 02:13:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:05.860039 | orchestrator | 2026-04-11 02:13:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:08.909534 | orchestrator | 2026-04-11 
02:13:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:08.911091 | orchestrator | 2026-04-11 02:13:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:08.911150 | orchestrator | 2026-04-11 02:13:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:11.958265 | orchestrator | 2026-04-11 02:13:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:11.959673 | orchestrator | 2026-04-11 02:13:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:11.959849 | orchestrator | 2026-04-11 02:13:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:15.007475 | orchestrator | 2026-04-11 02:13:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:15.009421 | orchestrator | 2026-04-11 02:13:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:15.009597 | orchestrator | 2026-04-11 02:13:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:18.050819 | orchestrator | 2026-04-11 02:13:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:18.052071 | orchestrator | 2026-04-11 02:13:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:18.052149 | orchestrator | 2026-04-11 02:13:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:21.095765 | orchestrator | 2026-04-11 02:13:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:21.097606 | orchestrator | 2026-04-11 02:13:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:21.098278 | orchestrator | 2026-04-11 02:13:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:24.141273 | orchestrator | 2026-04-11 02:13:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:13:24.143651 | orchestrator | 2026-04-11 02:13:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:24.143721 | orchestrator | 2026-04-11 02:13:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:27.193039 | orchestrator | 2026-04-11 02:13:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:27.195447 | orchestrator | 2026-04-11 02:13:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:27.195531 | orchestrator | 2026-04-11 02:13:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:30.236429 | orchestrator | 2026-04-11 02:13:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:30.237421 | orchestrator | 2026-04-11 02:13:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:30.237461 | orchestrator | 2026-04-11 02:13:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:33.281815 | orchestrator | 2026-04-11 02:13:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:33.283983 | orchestrator | 2026-04-11 02:13:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:33.284125 | orchestrator | 2026-04-11 02:13:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:36.334395 | orchestrator | 2026-04-11 02:13:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:36.336761 | orchestrator | 2026-04-11 02:13:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:36.336839 | orchestrator | 2026-04-11 02:13:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:39.392229 | orchestrator | 2026-04-11 02:13:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:39.393801 | orchestrator | 2026-04-11 02:13:39 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:39.394312 | orchestrator | 2026-04-11 02:13:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:42.442280 | orchestrator | 2026-04-11 02:13:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:42.444403 | orchestrator | 2026-04-11 02:13:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:42.444481 | orchestrator | 2026-04-11 02:13:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:45.490913 | orchestrator | 2026-04-11 02:13:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:45.492222 | orchestrator | 2026-04-11 02:13:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:45.492285 | orchestrator | 2026-04-11 02:13:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:48.538046 | orchestrator | 2026-04-11 02:13:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:48.539789 | orchestrator | 2026-04-11 02:13:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:48.539884 | orchestrator | 2026-04-11 02:13:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:51.586103 | orchestrator | 2026-04-11 02:13:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:51.588070 | orchestrator | 2026-04-11 02:13:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:13:51.588125 | orchestrator | 2026-04-11 02:13:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:13:54.637169 | orchestrator | 2026-04-11 02:13:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:13:54.638506 | orchestrator | 2026-04-11 02:13:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:13:54.638642 | orchestrator | 2026-04-11 02:13:54 | INFO  | Wait 1 second(s) until the next check
2026-04-11 02:13:57.683494 | orchestrator | 2026-04-11 02:13:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 02:13:57.686182 | orchestrator | 2026-04-11 02:13:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 02:13:57.686233 | orchestrator | 2026-04-11 02:13:57 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries repeated every ~3 seconds from 02:14:00 through 02:18:53; both tasks remained in state STARTED throughout ...]
2026-04-11 02:18:56.576862 | orchestrator | 2026-04-11 02:18:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 02:18:56.578746 | orchestrator | 2026-04-11 02:18:56 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:18:56.578793 | orchestrator | 2026-04-11 02:18:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:18:59.626257 | orchestrator | 2026-04-11 02:18:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:18:59.627700 | orchestrator | 2026-04-11 02:18:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:18:59.627789 | orchestrator | 2026-04-11 02:18:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:02.677719 | orchestrator | 2026-04-11 02:19:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:02.680247 | orchestrator | 2026-04-11 02:19:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:02.680321 | orchestrator | 2026-04-11 02:19:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:05.727236 | orchestrator | 2026-04-11 02:19:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:05.730094 | orchestrator | 2026-04-11 02:19:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:05.730154 | orchestrator | 2026-04-11 02:19:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:08.776707 | orchestrator | 2026-04-11 02:19:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:08.778695 | orchestrator | 2026-04-11 02:19:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:08.779084 | orchestrator | 2026-04-11 02:19:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:11.827374 | orchestrator | 2026-04-11 02:19:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:11.829075 | orchestrator | 2026-04-11 02:19:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:19:11.829150 | orchestrator | 2026-04-11 02:19:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:14.873609 | orchestrator | 2026-04-11 02:19:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:14.876297 | orchestrator | 2026-04-11 02:19:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:14.876360 | orchestrator | 2026-04-11 02:19:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:17.923030 | orchestrator | 2026-04-11 02:19:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:17.924466 | orchestrator | 2026-04-11 02:19:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:17.924731 | orchestrator | 2026-04-11 02:19:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:20.966324 | orchestrator | 2026-04-11 02:19:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:20.968414 | orchestrator | 2026-04-11 02:19:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:20.968452 | orchestrator | 2026-04-11 02:19:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:24.007538 | orchestrator | 2026-04-11 02:19:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:24.009454 | orchestrator | 2026-04-11 02:19:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:24.009527 | orchestrator | 2026-04-11 02:19:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:27.062928 | orchestrator | 2026-04-11 02:19:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:27.065092 | orchestrator | 2026-04-11 02:19:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:27.065149 | orchestrator | 2026-04-11 02:19:27 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:19:30.116227 | orchestrator | 2026-04-11 02:19:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:30.118880 | orchestrator | 2026-04-11 02:19:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:30.119063 | orchestrator | 2026-04-11 02:19:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:33.162706 | orchestrator | 2026-04-11 02:19:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:33.164948 | orchestrator | 2026-04-11 02:19:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:33.164993 | orchestrator | 2026-04-11 02:19:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:36.209810 | orchestrator | 2026-04-11 02:19:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:36.211108 | orchestrator | 2026-04-11 02:19:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:36.211197 | orchestrator | 2026-04-11 02:19:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:39.262472 | orchestrator | 2026-04-11 02:19:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:39.265084 | orchestrator | 2026-04-11 02:19:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:39.265180 | orchestrator | 2026-04-11 02:19:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:42.310371 | orchestrator | 2026-04-11 02:19:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:42.311489 | orchestrator | 2026-04-11 02:19:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:42.311534 | orchestrator | 2026-04-11 02:19:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:45.371569 | orchestrator | 2026-04-11 
02:19:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:45.373494 | orchestrator | 2026-04-11 02:19:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:45.373719 | orchestrator | 2026-04-11 02:19:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:48.418480 | orchestrator | 2026-04-11 02:19:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:48.421792 | orchestrator | 2026-04-11 02:19:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:48.421863 | orchestrator | 2026-04-11 02:19:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:51.465708 | orchestrator | 2026-04-11 02:19:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:51.468108 | orchestrator | 2026-04-11 02:19:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:51.468209 | orchestrator | 2026-04-11 02:19:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:54.517527 | orchestrator | 2026-04-11 02:19:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:54.518898 | orchestrator | 2026-04-11 02:19:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:54.518978 | orchestrator | 2026-04-11 02:19:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:19:57.569830 | orchestrator | 2026-04-11 02:19:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:19:57.571297 | orchestrator | 2026-04-11 02:19:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:19:57.571413 | orchestrator | 2026-04-11 02:19:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:20:00.609617 | orchestrator | 2026-04-11 02:20:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:20:00.611244 | orchestrator | 2026-04-11 02:20:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:20:00.611387 | orchestrator | 2026-04-11 02:20:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:20:03.656626 | orchestrator | 2026-04-11 02:20:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:20:03.658556 | orchestrator | 2026-04-11 02:20:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:20:03.658626 | orchestrator | 2026-04-11 02:20:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:20:06.712077 | orchestrator | 2026-04-11 02:20:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:20:06.713621 | orchestrator | 2026-04-11 02:20:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:20:06.713748 | orchestrator | 2026-04-11 02:20:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:20:09.764683 | orchestrator | 2026-04-11 02:20:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:20:09.766417 | orchestrator | 2026-04-11 02:20:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:20:09.766446 | orchestrator | 2026-04-11 02:20:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:20:12.813887 | orchestrator | 2026-04-11 02:20:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:20:12.815454 | orchestrator | 2026-04-11 02:20:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:20:12.816464 | orchestrator | 2026-04-11 02:20:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:20:15.864677 | orchestrator | 2026-04-11 02:20:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:20:15.867563 | orchestrator | 2026-04-11 02:20:15 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:20:15.867668 | orchestrator | 2026-04-11 02:20:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:20:18.914408 | orchestrator | 2026-04-11 02:20:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:20:18.914814 | orchestrator | 2026-04-11 02:20:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:20:18.914838 | orchestrator | 2026-04-11 02:20:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:20:21.955704 | orchestrator | 2026-04-11 02:20:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:20:21.957876 | orchestrator | 2026-04-11 02:20:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:20:21.957961 | orchestrator | 2026-04-11 02:20:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:20:24.994621 | orchestrator | 2026-04-11 02:20:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:20:24.994737 | orchestrator | 2026-04-11 02:20:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:20:24.994747 | orchestrator | 2026-04-11 02:20:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:20:28.036897 | orchestrator | 2026-04-11 02:20:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:20:28.037898 | orchestrator | 2026-04-11 02:20:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:20:28.037984 | orchestrator | 2026-04-11 02:20:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:20:31.088344 | orchestrator | 2026-04-11 02:20:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:20:31.088734 | orchestrator | 2026-04-11 02:20:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:20:31.088813 | orchestrator | 2026-04-11 02:20:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:22:34.231353 | orchestrator | 2026-04-11 02:22:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:22:34.231450 | orchestrator | 2026-04-11 02:22:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:22:34.231460 | orchestrator | 2026-04-11 02:22:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:22:37.276798 | orchestrator | 2026-04-11 02:22:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:22:37.278906 | orchestrator | 2026-04-11 02:22:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:22:37.279079 | orchestrator | 2026-04-11 02:22:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:22:40.327548 | orchestrator | 2026-04-11 02:22:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:22:40.329655 | orchestrator | 2026-04-11 02:22:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:22:40.329708 | orchestrator | 2026-04-11 02:22:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:22:43.382334 | orchestrator | 2026-04-11 02:22:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:22:43.383758 | orchestrator | 2026-04-11 02:22:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:22:43.383809 | orchestrator | 2026-04-11 02:22:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:22:46.425596 | orchestrator | 2026-04-11 02:22:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:22:46.427389 | orchestrator | 2026-04-11 02:22:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:22:46.427726 | orchestrator | 2026-04-11 02:22:46 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:22:49.477196 | orchestrator | 2026-04-11 02:22:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:22:49.480236 | orchestrator | 2026-04-11 02:22:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:22:49.480294 | orchestrator | 2026-04-11 02:22:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:22:52.536689 | orchestrator | 2026-04-11 02:22:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:22:52.538863 | orchestrator | 2026-04-11 02:22:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:22:52.538908 | orchestrator | 2026-04-11 02:22:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:22:55.584368 | orchestrator | 2026-04-11 02:22:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:22:55.587104 | orchestrator | 2026-04-11 02:22:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:22:55.587271 | orchestrator | 2026-04-11 02:22:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:22:58.633064 | orchestrator | 2026-04-11 02:22:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:22:58.635322 | orchestrator | 2026-04-11 02:22:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:22:58.635501 | orchestrator | 2026-04-11 02:22:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:01.683701 | orchestrator | 2026-04-11 02:23:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:01.685150 | orchestrator | 2026-04-11 02:23:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:01.685213 | orchestrator | 2026-04-11 02:23:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:04.733462 | orchestrator | 2026-04-11 
02:23:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:04.735468 | orchestrator | 2026-04-11 02:23:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:04.735551 | orchestrator | 2026-04-11 02:23:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:07.784729 | orchestrator | 2026-04-11 02:23:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:07.787768 | orchestrator | 2026-04-11 02:23:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:07.787834 | orchestrator | 2026-04-11 02:23:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:10.838248 | orchestrator | 2026-04-11 02:23:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:10.839973 | orchestrator | 2026-04-11 02:23:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:10.840046 | orchestrator | 2026-04-11 02:23:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:13.879443 | orchestrator | 2026-04-11 02:23:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:13.880302 | orchestrator | 2026-04-11 02:23:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:13.880333 | orchestrator | 2026-04-11 02:23:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:16.925234 | orchestrator | 2026-04-11 02:23:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:16.928232 | orchestrator | 2026-04-11 02:23:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:16.928329 | orchestrator | 2026-04-11 02:23:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:19.965114 | orchestrator | 2026-04-11 02:23:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:23:19.966975 | orchestrator | 2026-04-11 02:23:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:19.967066 | orchestrator | 2026-04-11 02:23:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:23.019975 | orchestrator | 2026-04-11 02:23:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:23.021757 | orchestrator | 2026-04-11 02:23:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:23.021797 | orchestrator | 2026-04-11 02:23:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:26.074867 | orchestrator | 2026-04-11 02:23:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:26.076601 | orchestrator | 2026-04-11 02:23:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:26.076659 | orchestrator | 2026-04-11 02:23:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:29.132424 | orchestrator | 2026-04-11 02:23:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:29.134470 | orchestrator | 2026-04-11 02:23:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:29.134521 | orchestrator | 2026-04-11 02:23:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:32.177152 | orchestrator | 2026-04-11 02:23:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:32.181037 | orchestrator | 2026-04-11 02:23:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:32.181102 | orchestrator | 2026-04-11 02:23:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:35.224127 | orchestrator | 2026-04-11 02:23:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:35.225755 | orchestrator | 2026-04-11 02:23:35 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:35.225839 | orchestrator | 2026-04-11 02:23:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:38.270815 | orchestrator | 2026-04-11 02:23:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:38.272365 | orchestrator | 2026-04-11 02:23:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:38.272441 | orchestrator | 2026-04-11 02:23:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:41.316362 | orchestrator | 2026-04-11 02:23:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:41.318500 | orchestrator | 2026-04-11 02:23:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:41.318557 | orchestrator | 2026-04-11 02:23:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:44.362662 | orchestrator | 2026-04-11 02:23:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:44.363590 | orchestrator | 2026-04-11 02:23:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:44.363650 | orchestrator | 2026-04-11 02:23:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:47.409930 | orchestrator | 2026-04-11 02:23:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:47.412779 | orchestrator | 2026-04-11 02:23:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:47.412863 | orchestrator | 2026-04-11 02:23:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:50.453163 | orchestrator | 2026-04-11 02:23:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:50.454865 | orchestrator | 2026-04-11 02:23:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:23:50.454949 | orchestrator | 2026-04-11 02:23:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:53.493777 | orchestrator | 2026-04-11 02:23:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:53.494861 | orchestrator | 2026-04-11 02:23:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:53.494930 | orchestrator | 2026-04-11 02:23:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:56.539200 | orchestrator | 2026-04-11 02:23:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:56.540078 | orchestrator | 2026-04-11 02:23:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:56.540132 | orchestrator | 2026-04-11 02:23:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:23:59.586945 | orchestrator | 2026-04-11 02:23:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:23:59.590480 | orchestrator | 2026-04-11 02:23:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:23:59.590635 | orchestrator | 2026-04-11 02:23:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:24:02.642607 | orchestrator | 2026-04-11 02:24:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:24:02.644352 | orchestrator | 2026-04-11 02:24:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:24:02.644421 | orchestrator | 2026-04-11 02:24:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:24:05.688498 | orchestrator | 2026-04-11 02:24:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:24:05.689245 | orchestrator | 2026-04-11 02:24:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:24:05.689292 | orchestrator | 2026-04-11 02:24:05 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:24:08.740369 | orchestrator | 2026-04-11 02:24:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:24:08.741704 | orchestrator | 2026-04-11 02:24:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:24:08.741755 | orchestrator | 2026-04-11 02:24:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:24:11.783638 | orchestrator | 2026-04-11 02:24:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:24:11.785553 | orchestrator | 2026-04-11 02:24:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:24:11.785616 | orchestrator | 2026-04-11 02:24:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:24:14.825044 | orchestrator | 2026-04-11 02:24:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:24:14.826103 | orchestrator | 2026-04-11 02:24:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:24:14.826184 | orchestrator | 2026-04-11 02:24:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:24:17.864212 | orchestrator | 2026-04-11 02:24:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:24:17.865643 | orchestrator | 2026-04-11 02:24:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:24:17.865712 | orchestrator | 2026-04-11 02:24:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:24:20.905709 | orchestrator | 2026-04-11 02:24:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:24:20.908383 | orchestrator | 2026-04-11 02:24:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:24:20.908477 | orchestrator | 2026-04-11 02:24:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:24:23.952130 | orchestrator | 2026-04-11 
02:24:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:24:23.954637 | orchestrator | 2026-04-11 02:24:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:24:23.954674 | orchestrator | 2026-04-11 02:24:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:24:27.008147 | orchestrator | 2026-04-11 02:24:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:24:27.011311 | orchestrator | 2026-04-11 02:24:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:24:27.011416 | orchestrator | 2026-04-11 02:24:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:24:30.063663 | orchestrator | 2026-04-11 02:24:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:24:30.067702 | orchestrator | 2026-04-11 02:24:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:24:30.067853 | orchestrator | 2026-04-11 02:24:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:24:33.115246 | orchestrator | 2026-04-11 02:24:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:24:33.116993 | orchestrator | 2026-04-11 02:24:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:24:33.117048 | orchestrator | 2026-04-11 02:24:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:24:36.162854 | orchestrator | 2026-04-11 02:24:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:24:36.164198 | orchestrator | 2026-04-11 02:24:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:24:36.164243 | orchestrator | 2026-04-11 02:24:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:24:39.207746 | orchestrator | 2026-04-11 02:24:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED
2026-04-11 02:24:39.209111 | orchestrator | 2026-04-11 02:24:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 02:24:39.209208 | orchestrator | 2026-04-11 02:24:39 | INFO  | Wait 1 second(s) until the next check
2026-04-11 02:24:42.263223 | orchestrator | 2026-04-11 02:24:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 02:24:42.265991 | orchestrator | 2026-04-11 02:24:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 02:24:42.266070 | orchestrator | 2026-04-11 02:24:42 | INFO  | Wait 1 second(s) until the next check
2026-04-11 02:24:45.311777 | orchestrator | 2026-04-11 02:24:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 02:24:45.312501 | orchestrator | 2026-04-11 02:24:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 02:24:45.312557 | orchestrator | 2026-04-11 02:24:45 | INFO  | Wait 1 second(s) until the next check
…
2026-04-11 02:30:08.681597 | orchestrator | 2026-04-11 02:30:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 02:30:08.682341 | orchestrator | 2026-04-11 02:30:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 02:30:08.682385 | orchestrator | 2026-04-11 02:30:08 | INFO  | Wait 1 second(s) until the next check
2026-04-11 02:30:11.734805 | orchestrator | 2026-04-11 02:30:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 02:30:11.736604 | orchestrator | 2026-04-11 02:30:11 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:11.736732 | orchestrator | 2026-04-11 02:30:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:14.789922 | orchestrator | 2026-04-11 02:30:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:14.791337 | orchestrator | 2026-04-11 02:30:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:14.791513 | orchestrator | 2026-04-11 02:30:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:17.845344 | orchestrator | 2026-04-11 02:30:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:17.847657 | orchestrator | 2026-04-11 02:30:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:17.847704 | orchestrator | 2026-04-11 02:30:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:20.892183 | orchestrator | 2026-04-11 02:30:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:20.894485 | orchestrator | 2026-04-11 02:30:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:20.894548 | orchestrator | 2026-04-11 02:30:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:23.941512 | orchestrator | 2026-04-11 02:30:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:23.943216 | orchestrator | 2026-04-11 02:30:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:23.943279 | orchestrator | 2026-04-11 02:30:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:26.992233 | orchestrator | 2026-04-11 02:30:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:26.993315 | orchestrator | 2026-04-11 02:30:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:30:26.993367 | orchestrator | 2026-04-11 02:30:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:30.041090 | orchestrator | 2026-04-11 02:30:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:30.043332 | orchestrator | 2026-04-11 02:30:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:30.043573 | orchestrator | 2026-04-11 02:30:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:33.087814 | orchestrator | 2026-04-11 02:30:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:33.090138 | orchestrator | 2026-04-11 02:30:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:33.090240 | orchestrator | 2026-04-11 02:30:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:36.137919 | orchestrator | 2026-04-11 02:30:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:36.139916 | orchestrator | 2026-04-11 02:30:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:36.140020 | orchestrator | 2026-04-11 02:30:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:39.190712 | orchestrator | 2026-04-11 02:30:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:39.192788 | orchestrator | 2026-04-11 02:30:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:39.192863 | orchestrator | 2026-04-11 02:30:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:42.245673 | orchestrator | 2026-04-11 02:30:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:42.249467 | orchestrator | 2026-04-11 02:30:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:42.249584 | orchestrator | 2026-04-11 02:30:42 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:30:45.298069 | orchestrator | 2026-04-11 02:30:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:45.300711 | orchestrator | 2026-04-11 02:30:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:45.300821 | orchestrator | 2026-04-11 02:30:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:48.351716 | orchestrator | 2026-04-11 02:30:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:48.353985 | orchestrator | 2026-04-11 02:30:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:48.354079 | orchestrator | 2026-04-11 02:30:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:51.404059 | orchestrator | 2026-04-11 02:30:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:51.405649 | orchestrator | 2026-04-11 02:30:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:51.406008 | orchestrator | 2026-04-11 02:30:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:54.453182 | orchestrator | 2026-04-11 02:30:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:54.453562 | orchestrator | 2026-04-11 02:30:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:54.453592 | orchestrator | 2026-04-11 02:30:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:30:57.496682 | orchestrator | 2026-04-11 02:30:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:30:57.498635 | orchestrator | 2026-04-11 02:30:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:30:57.499006 | orchestrator | 2026-04-11 02:30:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:00.548244 | orchestrator | 2026-04-11 
02:31:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:00.549568 | orchestrator | 2026-04-11 02:31:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:00.549626 | orchestrator | 2026-04-11 02:31:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:03.609246 | orchestrator | 2026-04-11 02:31:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:03.611054 | orchestrator | 2026-04-11 02:31:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:03.611227 | orchestrator | 2026-04-11 02:31:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:06.664942 | orchestrator | 2026-04-11 02:31:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:06.667392 | orchestrator | 2026-04-11 02:31:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:06.667461 | orchestrator | 2026-04-11 02:31:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:09.717941 | orchestrator | 2026-04-11 02:31:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:09.718951 | orchestrator | 2026-04-11 02:31:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:09.719099 | orchestrator | 2026-04-11 02:31:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:12.762070 | orchestrator | 2026-04-11 02:31:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:12.764078 | orchestrator | 2026-04-11 02:31:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:12.764179 | orchestrator | 2026-04-11 02:31:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:15.808417 | orchestrator | 2026-04-11 02:31:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:31:15.811052 | orchestrator | 2026-04-11 02:31:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:15.811152 | orchestrator | 2026-04-11 02:31:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:18.859237 | orchestrator | 2026-04-11 02:31:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:18.860458 | orchestrator | 2026-04-11 02:31:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:18.860517 | orchestrator | 2026-04-11 02:31:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:21.907239 | orchestrator | 2026-04-11 02:31:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:21.908883 | orchestrator | 2026-04-11 02:31:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:21.908958 | orchestrator | 2026-04-11 02:31:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:24.954915 | orchestrator | 2026-04-11 02:31:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:24.957512 | orchestrator | 2026-04-11 02:31:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:24.957588 | orchestrator | 2026-04-11 02:31:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:28.003439 | orchestrator | 2026-04-11 02:31:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:28.004926 | orchestrator | 2026-04-11 02:31:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:28.005038 | orchestrator | 2026-04-11 02:31:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:31.051147 | orchestrator | 2026-04-11 02:31:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:31.052099 | orchestrator | 2026-04-11 02:31:31 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:31.052143 | orchestrator | 2026-04-11 02:31:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:34.101213 | orchestrator | 2026-04-11 02:31:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:34.103921 | orchestrator | 2026-04-11 02:31:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:34.104000 | orchestrator | 2026-04-11 02:31:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:37.158191 | orchestrator | 2026-04-11 02:31:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:37.161211 | orchestrator | 2026-04-11 02:31:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:37.161299 | orchestrator | 2026-04-11 02:31:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:40.209373 | orchestrator | 2026-04-11 02:31:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:40.210620 | orchestrator | 2026-04-11 02:31:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:40.210684 | orchestrator | 2026-04-11 02:31:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:43.263286 | orchestrator | 2026-04-11 02:31:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:43.265865 | orchestrator | 2026-04-11 02:31:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:43.265960 | orchestrator | 2026-04-11 02:31:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:46.317157 | orchestrator | 2026-04-11 02:31:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:46.318612 | orchestrator | 2026-04-11 02:31:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:31:46.318678 | orchestrator | 2026-04-11 02:31:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:49.378455 | orchestrator | 2026-04-11 02:31:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:49.382496 | orchestrator | 2026-04-11 02:31:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:49.382549 | orchestrator | 2026-04-11 02:31:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:52.436987 | orchestrator | 2026-04-11 02:31:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:52.437927 | orchestrator | 2026-04-11 02:31:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:52.437973 | orchestrator | 2026-04-11 02:31:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:55.491577 | orchestrator | 2026-04-11 02:31:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:55.492959 | orchestrator | 2026-04-11 02:31:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:55.493033 | orchestrator | 2026-04-11 02:31:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:31:58.538596 | orchestrator | 2026-04-11 02:31:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:31:58.540481 | orchestrator | 2026-04-11 02:31:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:31:58.540535 | orchestrator | 2026-04-11 02:31:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:01.593254 | orchestrator | 2026-04-11 02:32:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:01.595447 | orchestrator | 2026-04-11 02:32:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:01.595492 | orchestrator | 2026-04-11 02:32:01 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:32:04.641017 | orchestrator | 2026-04-11 02:32:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:04.641934 | orchestrator | 2026-04-11 02:32:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:04.642002 | orchestrator | 2026-04-11 02:32:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:07.702077 | orchestrator | 2026-04-11 02:32:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:07.703754 | orchestrator | 2026-04-11 02:32:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:07.703816 | orchestrator | 2026-04-11 02:32:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:10.752853 | orchestrator | 2026-04-11 02:32:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:10.754098 | orchestrator | 2026-04-11 02:32:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:10.754232 | orchestrator | 2026-04-11 02:32:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:13.804207 | orchestrator | 2026-04-11 02:32:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:13.805316 | orchestrator | 2026-04-11 02:32:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:13.805378 | orchestrator | 2026-04-11 02:32:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:16.853879 | orchestrator | 2026-04-11 02:32:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:16.854892 | orchestrator | 2026-04-11 02:32:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:16.855097 | orchestrator | 2026-04-11 02:32:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:19.903022 | orchestrator | 2026-04-11 
02:32:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:19.904304 | orchestrator | 2026-04-11 02:32:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:19.904360 | orchestrator | 2026-04-11 02:32:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:22.959135 | orchestrator | 2026-04-11 02:32:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:22.961443 | orchestrator | 2026-04-11 02:32:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:22.961500 | orchestrator | 2026-04-11 02:32:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:26.008593 | orchestrator | 2026-04-11 02:32:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:26.013362 | orchestrator | 2026-04-11 02:32:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:26.013439 | orchestrator | 2026-04-11 02:32:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:29.059859 | orchestrator | 2026-04-11 02:32:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:29.062596 | orchestrator | 2026-04-11 02:32:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:29.062945 | orchestrator | 2026-04-11 02:32:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:32.117342 | orchestrator | 2026-04-11 02:32:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:32.119797 | orchestrator | 2026-04-11 02:32:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:32.119909 | orchestrator | 2026-04-11 02:32:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:35.174465 | orchestrator | 2026-04-11 02:32:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:32:35.175729 | orchestrator | 2026-04-11 02:32:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:35.175773 | orchestrator | 2026-04-11 02:32:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:38.226483 | orchestrator | 2026-04-11 02:32:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:38.229122 | orchestrator | 2026-04-11 02:32:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:38.229223 | orchestrator | 2026-04-11 02:32:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:41.275869 | orchestrator | 2026-04-11 02:32:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:41.278268 | orchestrator | 2026-04-11 02:32:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:41.278313 | orchestrator | 2026-04-11 02:32:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:44.331336 | orchestrator | 2026-04-11 02:32:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:44.332129 | orchestrator | 2026-04-11 02:32:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:44.332159 | orchestrator | 2026-04-11 02:32:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:47.380082 | orchestrator | 2026-04-11 02:32:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:47.382566 | orchestrator | 2026-04-11 02:32:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:47.382622 | orchestrator | 2026-04-11 02:32:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:50.431608 | orchestrator | 2026-04-11 02:32:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:50.434000 | orchestrator | 2026-04-11 02:32:50 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:50.434112 | orchestrator | 2026-04-11 02:32:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:53.485852 | orchestrator | 2026-04-11 02:32:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:53.487362 | orchestrator | 2026-04-11 02:32:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:53.487503 | orchestrator | 2026-04-11 02:32:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:56.534476 | orchestrator | 2026-04-11 02:32:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:56.537176 | orchestrator | 2026-04-11 02:32:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:56.537393 | orchestrator | 2026-04-11 02:32:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:32:59.591375 | orchestrator | 2026-04-11 02:32:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:32:59.593162 | orchestrator | 2026-04-11 02:32:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:32:59.593238 | orchestrator | 2026-04-11 02:32:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:02.639290 | orchestrator | 2026-04-11 02:33:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:02.641272 | orchestrator | 2026-04-11 02:33:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:02.641336 | orchestrator | 2026-04-11 02:33:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:05.692562 | orchestrator | 2026-04-11 02:33:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:05.693963 | orchestrator | 2026-04-11 02:33:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:33:05.694011 | orchestrator | 2026-04-11 02:33:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:08.742130 | orchestrator | 2026-04-11 02:33:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:08.744345 | orchestrator | 2026-04-11 02:33:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:08.744417 | orchestrator | 2026-04-11 02:33:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:11.785912 | orchestrator | 2026-04-11 02:33:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:11.787673 | orchestrator | 2026-04-11 02:33:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:11.787870 | orchestrator | 2026-04-11 02:33:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:14.838383 | orchestrator | 2026-04-11 02:33:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:14.841232 | orchestrator | 2026-04-11 02:33:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:14.841357 | orchestrator | 2026-04-11 02:33:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:17.888733 | orchestrator | 2026-04-11 02:33:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:17.890337 | orchestrator | 2026-04-11 02:33:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:17.890366 | orchestrator | 2026-04-11 02:33:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:20.940993 | orchestrator | 2026-04-11 02:33:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:20.942931 | orchestrator | 2026-04-11 02:33:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:20.943026 | orchestrator | 2026-04-11 02:33:20 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:33:23.995441 | orchestrator | 2026-04-11 02:33:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:23.998484 | orchestrator | 2026-04-11 02:33:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:23.998552 | orchestrator | 2026-04-11 02:33:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:27.050107 | orchestrator | 2026-04-11 02:33:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:27.052241 | orchestrator | 2026-04-11 02:33:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:27.052297 | orchestrator | 2026-04-11 02:33:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:30.099825 | orchestrator | 2026-04-11 02:33:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:30.101494 | orchestrator | 2026-04-11 02:33:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:30.101711 | orchestrator | 2026-04-11 02:33:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:33.150303 | orchestrator | 2026-04-11 02:33:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:33.152348 | orchestrator | 2026-04-11 02:33:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:33.152451 | orchestrator | 2026-04-11 02:33:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:36.198152 | orchestrator | 2026-04-11 02:33:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:36.199802 | orchestrator | 2026-04-11 02:33:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:36.199875 | orchestrator | 2026-04-11 02:33:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:39.244604 | orchestrator | 2026-04-11 
02:33:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:39.248479 | orchestrator | 2026-04-11 02:33:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:39.248541 | orchestrator | 2026-04-11 02:33:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:42.301269 | orchestrator | 2026-04-11 02:33:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:42.303931 | orchestrator | 2026-04-11 02:33:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:42.304095 | orchestrator | 2026-04-11 02:33:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:45.361023 | orchestrator | 2026-04-11 02:33:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:45.363530 | orchestrator | 2026-04-11 02:33:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:45.364131 | orchestrator | 2026-04-11 02:33:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:48.411048 | orchestrator | 2026-04-11 02:33:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:48.412386 | orchestrator | 2026-04-11 02:33:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:48.412463 | orchestrator | 2026-04-11 02:33:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:51.451599 | orchestrator | 2026-04-11 02:33:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:33:51.453476 | orchestrator | 2026-04-11 02:33:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:33:51.453594 | orchestrator | 2026-04-11 02:33:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:33:54.497590 | orchestrator | 2026-04-11 02:33:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:33:54.499861 | orchestrator | 2026-04-11 02:33:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 02:33:54.499947 | orchestrator | 2026-04-11 02:33:54 | INFO  | Wait 1 second(s) until the next check
[… identical polling messages — "Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED", "Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED", "Wait 1 second(s) until the next check" — repeat roughly every 3 seconds from 02:33:57 through 02:39:08; both tasks remain in state STARTED throughout …]
2026-04-11 02:39:11.728298 | orchestrator | 2026-04-11 02:39:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state
STARTED 2026-04-11 02:39:11.729863 | orchestrator | 2026-04-11 02:39:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:11.729931 | orchestrator | 2026-04-11 02:39:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:14.778486 | orchestrator | 2026-04-11 02:39:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:14.780153 | orchestrator | 2026-04-11 02:39:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:14.780274 | orchestrator | 2026-04-11 02:39:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:17.827210 | orchestrator | 2026-04-11 02:39:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:17.828369 | orchestrator | 2026-04-11 02:39:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:17.828400 | orchestrator | 2026-04-11 02:39:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:20.868158 | orchestrator | 2026-04-11 02:39:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:20.869711 | orchestrator | 2026-04-11 02:39:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:20.869799 | orchestrator | 2026-04-11 02:39:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:23.914856 | orchestrator | 2026-04-11 02:39:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:23.916216 | orchestrator | 2026-04-11 02:39:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:23.916249 | orchestrator | 2026-04-11 02:39:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:26.967280 | orchestrator | 2026-04-11 02:39:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:26.969851 | orchestrator | 2026-04-11 02:39:26 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:26.969908 | orchestrator | 2026-04-11 02:39:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:30.021996 | orchestrator | 2026-04-11 02:39:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:30.023579 | orchestrator | 2026-04-11 02:39:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:30.023637 | orchestrator | 2026-04-11 02:39:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:33.074442 | orchestrator | 2026-04-11 02:39:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:33.080379 | orchestrator | 2026-04-11 02:39:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:33.080435 | orchestrator | 2026-04-11 02:39:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:36.125655 | orchestrator | 2026-04-11 02:39:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:36.127439 | orchestrator | 2026-04-11 02:39:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:36.127525 | orchestrator | 2026-04-11 02:39:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:39.175014 | orchestrator | 2026-04-11 02:39:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:39.177603 | orchestrator | 2026-04-11 02:39:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:39.177656 | orchestrator | 2026-04-11 02:39:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:42.230426 | orchestrator | 2026-04-11 02:39:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:42.232498 | orchestrator | 2026-04-11 02:39:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:39:42.232567 | orchestrator | 2026-04-11 02:39:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:45.277989 | orchestrator | 2026-04-11 02:39:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:45.280122 | orchestrator | 2026-04-11 02:39:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:45.280250 | orchestrator | 2026-04-11 02:39:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:48.327657 | orchestrator | 2026-04-11 02:39:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:48.328666 | orchestrator | 2026-04-11 02:39:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:48.328717 | orchestrator | 2026-04-11 02:39:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:51.377670 | orchestrator | 2026-04-11 02:39:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:51.379297 | orchestrator | 2026-04-11 02:39:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:51.379345 | orchestrator | 2026-04-11 02:39:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:54.429003 | orchestrator | 2026-04-11 02:39:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:54.430695 | orchestrator | 2026-04-11 02:39:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:54.430753 | orchestrator | 2026-04-11 02:39:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:39:57.480190 | orchestrator | 2026-04-11 02:39:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:39:57.482337 | orchestrator | 2026-04-11 02:39:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:39:57.482403 | orchestrator | 2026-04-11 02:39:57 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:40:00.531530 | orchestrator | 2026-04-11 02:40:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:00.533225 | orchestrator | 2026-04-11 02:40:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:00.533312 | orchestrator | 2026-04-11 02:40:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:03.589500 | orchestrator | 2026-04-11 02:40:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:03.591508 | orchestrator | 2026-04-11 02:40:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:03.591590 | orchestrator | 2026-04-11 02:40:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:06.641699 | orchestrator | 2026-04-11 02:40:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:06.642867 | orchestrator | 2026-04-11 02:40:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:06.643121 | orchestrator | 2026-04-11 02:40:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:09.691733 | orchestrator | 2026-04-11 02:40:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:09.692868 | orchestrator | 2026-04-11 02:40:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:09.692912 | orchestrator | 2026-04-11 02:40:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:12.741341 | orchestrator | 2026-04-11 02:40:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:12.742936 | orchestrator | 2026-04-11 02:40:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:12.743032 | orchestrator | 2026-04-11 02:40:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:15.789990 | orchestrator | 2026-04-11 
02:40:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:15.792067 | orchestrator | 2026-04-11 02:40:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:15.792152 | orchestrator | 2026-04-11 02:40:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:18.844632 | orchestrator | 2026-04-11 02:40:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:18.846481 | orchestrator | 2026-04-11 02:40:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:18.846529 | orchestrator | 2026-04-11 02:40:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:21.897331 | orchestrator | 2026-04-11 02:40:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:21.900836 | orchestrator | 2026-04-11 02:40:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:21.900942 | orchestrator | 2026-04-11 02:40:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:24.951705 | orchestrator | 2026-04-11 02:40:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:24.954631 | orchestrator | 2026-04-11 02:40:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:24.954700 | orchestrator | 2026-04-11 02:40:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:28.003618 | orchestrator | 2026-04-11 02:40:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:28.005486 | orchestrator | 2026-04-11 02:40:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:28.005570 | orchestrator | 2026-04-11 02:40:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:31.056687 | orchestrator | 2026-04-11 02:40:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:40:31.059117 | orchestrator | 2026-04-11 02:40:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:31.059199 | orchestrator | 2026-04-11 02:40:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:34.114928 | orchestrator | 2026-04-11 02:40:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:34.116082 | orchestrator | 2026-04-11 02:40:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:34.116121 | orchestrator | 2026-04-11 02:40:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:37.158381 | orchestrator | 2026-04-11 02:40:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:37.159944 | orchestrator | 2026-04-11 02:40:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:37.159999 | orchestrator | 2026-04-11 02:40:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:40.214418 | orchestrator | 2026-04-11 02:40:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:40.216453 | orchestrator | 2026-04-11 02:40:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:40.216516 | orchestrator | 2026-04-11 02:40:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:43.272661 | orchestrator | 2026-04-11 02:40:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:43.274070 | orchestrator | 2026-04-11 02:40:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:43.274172 | orchestrator | 2026-04-11 02:40:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:46.325403 | orchestrator | 2026-04-11 02:40:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:46.326002 | orchestrator | 2026-04-11 02:40:46 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:46.326253 | orchestrator | 2026-04-11 02:40:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:49.375445 | orchestrator | 2026-04-11 02:40:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:49.379677 | orchestrator | 2026-04-11 02:40:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:49.380437 | orchestrator | 2026-04-11 02:40:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:52.429828 | orchestrator | 2026-04-11 02:40:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:52.432526 | orchestrator | 2026-04-11 02:40:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:52.432607 | orchestrator | 2026-04-11 02:40:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:55.482994 | orchestrator | 2026-04-11 02:40:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:55.485579 | orchestrator | 2026-04-11 02:40:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:55.485640 | orchestrator | 2026-04-11 02:40:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:40:58.536718 | orchestrator | 2026-04-11 02:40:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:40:58.538404 | orchestrator | 2026-04-11 02:40:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:40:58.538505 | orchestrator | 2026-04-11 02:40:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:01.585664 | orchestrator | 2026-04-11 02:41:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:01.586256 | orchestrator | 2026-04-11 02:41:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:41:01.586854 | orchestrator | 2026-04-11 02:41:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:04.648615 | orchestrator | 2026-04-11 02:41:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:04.651373 | orchestrator | 2026-04-11 02:41:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:04.651447 | orchestrator | 2026-04-11 02:41:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:07.699059 | orchestrator | 2026-04-11 02:41:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:07.700884 | orchestrator | 2026-04-11 02:41:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:07.701299 | orchestrator | 2026-04-11 02:41:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:10.752049 | orchestrator | 2026-04-11 02:41:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:10.754163 | orchestrator | 2026-04-11 02:41:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:10.754217 | orchestrator | 2026-04-11 02:41:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:13.803729 | orchestrator | 2026-04-11 02:41:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:13.805276 | orchestrator | 2026-04-11 02:41:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:13.805316 | orchestrator | 2026-04-11 02:41:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:16.851418 | orchestrator | 2026-04-11 02:41:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:16.852964 | orchestrator | 2026-04-11 02:41:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:16.853031 | orchestrator | 2026-04-11 02:41:16 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:41:19.900809 | orchestrator | 2026-04-11 02:41:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:19.903805 | orchestrator | 2026-04-11 02:41:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:19.907197 | orchestrator | 2026-04-11 02:41:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:22.953307 | orchestrator | 2026-04-11 02:41:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:22.955194 | orchestrator | 2026-04-11 02:41:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:22.955244 | orchestrator | 2026-04-11 02:41:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:26.001532 | orchestrator | 2026-04-11 02:41:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:26.004593 | orchestrator | 2026-04-11 02:41:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:26.005413 | orchestrator | 2026-04-11 02:41:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:29.055208 | orchestrator | 2026-04-11 02:41:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:29.057723 | orchestrator | 2026-04-11 02:41:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:29.057830 | orchestrator | 2026-04-11 02:41:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:32.110002 | orchestrator | 2026-04-11 02:41:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:32.111529 | orchestrator | 2026-04-11 02:41:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:32.111643 | orchestrator | 2026-04-11 02:41:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:35.162311 | orchestrator | 2026-04-11 
02:41:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:35.163878 | orchestrator | 2026-04-11 02:41:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:35.164007 | orchestrator | 2026-04-11 02:41:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:38.214667 | orchestrator | 2026-04-11 02:41:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:38.216127 | orchestrator | 2026-04-11 02:41:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:38.216197 | orchestrator | 2026-04-11 02:41:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:41.265803 | orchestrator | 2026-04-11 02:41:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:41.268131 | orchestrator | 2026-04-11 02:41:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:41.268303 | orchestrator | 2026-04-11 02:41:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:44.314214 | orchestrator | 2026-04-11 02:41:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:44.316136 | orchestrator | 2026-04-11 02:41:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:44.316180 | orchestrator | 2026-04-11 02:41:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:47.365875 | orchestrator | 2026-04-11 02:41:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:47.367092 | orchestrator | 2026-04-11 02:41:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:47.367152 | orchestrator | 2026-04-11 02:41:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:50.416047 | orchestrator | 2026-04-11 02:41:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:41:50.418249 | orchestrator | 2026-04-11 02:41:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:50.418297 | orchestrator | 2026-04-11 02:41:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:53.467395 | orchestrator | 2026-04-11 02:41:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:53.469680 | orchestrator | 2026-04-11 02:41:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:53.469734 | orchestrator | 2026-04-11 02:41:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:56.518621 | orchestrator | 2026-04-11 02:41:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:56.520830 | orchestrator | 2026-04-11 02:41:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:56.521410 | orchestrator | 2026-04-11 02:41:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:41:59.572694 | orchestrator | 2026-04-11 02:41:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:41:59.575408 | orchestrator | 2026-04-11 02:41:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:41:59.575479 | orchestrator | 2026-04-11 02:41:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:42:02.627839 | orchestrator | 2026-04-11 02:42:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:42:02.629695 | orchestrator | 2026-04-11 02:42:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:42:02.629839 | orchestrator | 2026-04-11 02:42:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:42:05.678439 | orchestrator | 2026-04-11 02:42:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:42:05.680401 | orchestrator | 2026-04-11 02:42:05 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:42:05.680460 | orchestrator | 2026-04-11 02:42:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:42:08.733205 | orchestrator | 2026-04-11 02:42:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:42:08.734660 | orchestrator | 2026-04-11 02:42:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:42:08.734831 | orchestrator | 2026-04-11 02:42:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:42:11.782918 | orchestrator | 2026-04-11 02:42:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:42:11.784745 | orchestrator | 2026-04-11 02:42:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:42:11.784854 | orchestrator | 2026-04-11 02:42:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:42:14.832823 | orchestrator | 2026-04-11 02:42:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:42:14.834689 | orchestrator | 2026-04-11 02:42:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:42:14.834740 | orchestrator | 2026-04-11 02:42:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:42:17.882849 | orchestrator | 2026-04-11 02:42:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:42:17.884663 | orchestrator | 2026-04-11 02:42:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:42:17.884840 | orchestrator | 2026-04-11 02:42:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:42:20.936831 | orchestrator | 2026-04-11 02:42:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:42:20.938203 | orchestrator | 2026-04-11 02:42:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:42:20.938470 | orchestrator | 2026-04-11 02:42:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:42:23.992289 | orchestrator | 2026-04-11 02:42:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:42:23.995253 | orchestrator | 2026-04-11 02:42:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:42:23.995306 | orchestrator | 2026-04-11 02:42:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:42:27.047917 | orchestrator | 2026-04-11 02:42:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:42:27.049214 | orchestrator | 2026-04-11 02:42:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:42:27.049255 | orchestrator | 2026-04-11 02:42:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:42:30.100715 | orchestrator | 2026-04-11 02:42:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:42:30.104002 | orchestrator | 2026-04-11 02:42:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:42:30.104057 | orchestrator | 2026-04-11 02:42:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:42:33.155702 | orchestrator | 2026-04-11 02:42:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:42:33.159047 | orchestrator | 2026-04-11 02:42:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:42:33.159202 | orchestrator | 2026-04-11 02:42:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:42:36.209859 | orchestrator | 2026-04-11 02:42:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:42:36.211903 | orchestrator | 2026-04-11 02:42:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:42:36.211979 | orchestrator | 2026-04-11 02:42:36 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:42:39.268931 | orchestrator | 2026-04-11 02:42:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 02:42:39.271652 | orchestrator | 2026-04-11 02:42:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 02:42:39.271731 | orchestrator | 2026-04-11 02:42:39 | INFO  | Wait 1 second(s) until the next check
2026-04-11 02:47:53.448811 | orchestrator | 2026-04-11 02:47:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 02:47:53.451552 | orchestrator | 2026-04-11 02:47:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 02:47:53.451665 | orchestrator | 2026-04-11 02:47:53 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:47:56.494776 | orchestrator | 2026-04-11 02:47:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:47:56.497150 | orchestrator | 2026-04-11 02:47:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:47:56.497281 | orchestrator | 2026-04-11 02:47:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:47:59.544527 | orchestrator | 2026-04-11 02:47:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:47:59.547355 | orchestrator | 2026-04-11 02:47:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:47:59.547436 | orchestrator | 2026-04-11 02:47:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:02.596997 | orchestrator | 2026-04-11 02:48:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:02.597832 | orchestrator | 2026-04-11 02:48:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:02.598115 | orchestrator | 2026-04-11 02:48:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:05.650334 | orchestrator | 2026-04-11 02:48:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:05.653102 | orchestrator | 2026-04-11 02:48:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:05.653233 | orchestrator | 2026-04-11 02:48:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:08.707812 | orchestrator | 2026-04-11 02:48:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:08.709839 | orchestrator | 2026-04-11 02:48:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:08.709940 | orchestrator | 2026-04-11 02:48:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:11.755555 | orchestrator | 2026-04-11 
02:48:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:11.757116 | orchestrator | 2026-04-11 02:48:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:11.757188 | orchestrator | 2026-04-11 02:48:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:14.800094 | orchestrator | 2026-04-11 02:48:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:14.800884 | orchestrator | 2026-04-11 02:48:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:14.801058 | orchestrator | 2026-04-11 02:48:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:17.853118 | orchestrator | 2026-04-11 02:48:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:17.855892 | orchestrator | 2026-04-11 02:48:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:17.855938 | orchestrator | 2026-04-11 02:48:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:20.904856 | orchestrator | 2026-04-11 02:48:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:20.905363 | orchestrator | 2026-04-11 02:48:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:20.905453 | orchestrator | 2026-04-11 02:48:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:23.955784 | orchestrator | 2026-04-11 02:48:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:23.957647 | orchestrator | 2026-04-11 02:48:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:23.957749 | orchestrator | 2026-04-11 02:48:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:27.007352 | orchestrator | 2026-04-11 02:48:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:48:27.008797 | orchestrator | 2026-04-11 02:48:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:27.008917 | orchestrator | 2026-04-11 02:48:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:30.059710 | orchestrator | 2026-04-11 02:48:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:30.060921 | orchestrator | 2026-04-11 02:48:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:30.060962 | orchestrator | 2026-04-11 02:48:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:33.112156 | orchestrator | 2026-04-11 02:48:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:33.113991 | orchestrator | 2026-04-11 02:48:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:33.114306 | orchestrator | 2026-04-11 02:48:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:36.166167 | orchestrator | 2026-04-11 02:48:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:36.168070 | orchestrator | 2026-04-11 02:48:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:36.168150 | orchestrator | 2026-04-11 02:48:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:39.211828 | orchestrator | 2026-04-11 02:48:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:39.214494 | orchestrator | 2026-04-11 02:48:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:39.214564 | orchestrator | 2026-04-11 02:48:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:42.258098 | orchestrator | 2026-04-11 02:48:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:42.259254 | orchestrator | 2026-04-11 02:48:42 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:42.259308 | orchestrator | 2026-04-11 02:48:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:45.314760 | orchestrator | 2026-04-11 02:48:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:45.317190 | orchestrator | 2026-04-11 02:48:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:45.317278 | orchestrator | 2026-04-11 02:48:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:48.361013 | orchestrator | 2026-04-11 02:48:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:48.362704 | orchestrator | 2026-04-11 02:48:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:48.362745 | orchestrator | 2026-04-11 02:48:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:51.414637 | orchestrator | 2026-04-11 02:48:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:51.416404 | orchestrator | 2026-04-11 02:48:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:51.416491 | orchestrator | 2026-04-11 02:48:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:54.462906 | orchestrator | 2026-04-11 02:48:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:54.464799 | orchestrator | 2026-04-11 02:48:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:48:54.464884 | orchestrator | 2026-04-11 02:48:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:48:57.512839 | orchestrator | 2026-04-11 02:48:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:48:57.515489 | orchestrator | 2026-04-11 02:48:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:48:57.515542 | orchestrator | 2026-04-11 02:48:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:00.567056 | orchestrator | 2026-04-11 02:49:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:00.569480 | orchestrator | 2026-04-11 02:49:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:00.569699 | orchestrator | 2026-04-11 02:49:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:03.624801 | orchestrator | 2026-04-11 02:49:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:03.627007 | orchestrator | 2026-04-11 02:49:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:03.627102 | orchestrator | 2026-04-11 02:49:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:06.678126 | orchestrator | 2026-04-11 02:49:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:06.678984 | orchestrator | 2026-04-11 02:49:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:06.679030 | orchestrator | 2026-04-11 02:49:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:09.729141 | orchestrator | 2026-04-11 02:49:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:09.732126 | orchestrator | 2026-04-11 02:49:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:09.732192 | orchestrator | 2026-04-11 02:49:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:12.773210 | orchestrator | 2026-04-11 02:49:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:12.775773 | orchestrator | 2026-04-11 02:49:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:12.775833 | orchestrator | 2026-04-11 02:49:12 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:49:15.817528 | orchestrator | 2026-04-11 02:49:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:15.819639 | orchestrator | 2026-04-11 02:49:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:15.819686 | orchestrator | 2026-04-11 02:49:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:18.873044 | orchestrator | 2026-04-11 02:49:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:18.874881 | orchestrator | 2026-04-11 02:49:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:18.874910 | orchestrator | 2026-04-11 02:49:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:21.930418 | orchestrator | 2026-04-11 02:49:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:21.932428 | orchestrator | 2026-04-11 02:49:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:21.932574 | orchestrator | 2026-04-11 02:49:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:24.988571 | orchestrator | 2026-04-11 02:49:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:24.993685 | orchestrator | 2026-04-11 02:49:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:24.993789 | orchestrator | 2026-04-11 02:49:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:28.050648 | orchestrator | 2026-04-11 02:49:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:28.052953 | orchestrator | 2026-04-11 02:49:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:28.053140 | orchestrator | 2026-04-11 02:49:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:31.101968 | orchestrator | 2026-04-11 
02:49:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:31.102971 | orchestrator | 2026-04-11 02:49:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:31.103020 | orchestrator | 2026-04-11 02:49:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:34.150720 | orchestrator | 2026-04-11 02:49:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:34.152503 | orchestrator | 2026-04-11 02:49:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:34.152572 | orchestrator | 2026-04-11 02:49:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:37.206777 | orchestrator | 2026-04-11 02:49:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:37.209555 | orchestrator | 2026-04-11 02:49:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:37.209640 | orchestrator | 2026-04-11 02:49:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:40.261334 | orchestrator | 2026-04-11 02:49:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:40.264012 | orchestrator | 2026-04-11 02:49:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:40.264081 | orchestrator | 2026-04-11 02:49:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:43.319736 | orchestrator | 2026-04-11 02:49:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:43.322285 | orchestrator | 2026-04-11 02:49:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:43.322345 | orchestrator | 2026-04-11 02:49:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:46.364901 | orchestrator | 2026-04-11 02:49:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:49:46.366254 | orchestrator | 2026-04-11 02:49:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:46.366333 | orchestrator | 2026-04-11 02:49:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:49.415769 | orchestrator | 2026-04-11 02:49:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:49.417068 | orchestrator | 2026-04-11 02:49:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:49.417141 | orchestrator | 2026-04-11 02:49:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:52.465788 | orchestrator | 2026-04-11 02:49:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:52.467531 | orchestrator | 2026-04-11 02:49:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:52.467582 | orchestrator | 2026-04-11 02:49:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:55.517997 | orchestrator | 2026-04-11 02:49:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:55.518553 | orchestrator | 2026-04-11 02:49:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:55.518579 | orchestrator | 2026-04-11 02:49:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:49:58.570330 | orchestrator | 2026-04-11 02:49:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:49:58.572255 | orchestrator | 2026-04-11 02:49:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:49:58.572335 | orchestrator | 2026-04-11 02:49:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:01.625734 | orchestrator | 2026-04-11 02:50:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:01.626981 | orchestrator | 2026-04-11 02:50:01 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:01.627109 | orchestrator | 2026-04-11 02:50:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:04.675068 | orchestrator | 2026-04-11 02:50:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:04.677831 | orchestrator | 2026-04-11 02:50:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:04.677900 | orchestrator | 2026-04-11 02:50:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:07.725518 | orchestrator | 2026-04-11 02:50:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:07.726772 | orchestrator | 2026-04-11 02:50:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:07.726822 | orchestrator | 2026-04-11 02:50:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:10.778515 | orchestrator | 2026-04-11 02:50:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:10.780935 | orchestrator | 2026-04-11 02:50:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:10.781029 | orchestrator | 2026-04-11 02:50:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:13.827468 | orchestrator | 2026-04-11 02:50:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:13.828177 | orchestrator | 2026-04-11 02:50:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:13.828328 | orchestrator | 2026-04-11 02:50:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:16.879030 | orchestrator | 2026-04-11 02:50:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:16.881332 | orchestrator | 2026-04-11 02:50:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:50:16.881396 | orchestrator | 2026-04-11 02:50:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:19.926645 | orchestrator | 2026-04-11 02:50:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:19.928387 | orchestrator | 2026-04-11 02:50:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:19.928469 | orchestrator | 2026-04-11 02:50:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:22.970955 | orchestrator | 2026-04-11 02:50:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:22.974801 | orchestrator | 2026-04-11 02:50:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:22.974882 | orchestrator | 2026-04-11 02:50:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:26.028147 | orchestrator | 2026-04-11 02:50:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:26.029952 | orchestrator | 2026-04-11 02:50:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:26.030075 | orchestrator | 2026-04-11 02:50:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:29.084491 | orchestrator | 2026-04-11 02:50:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:29.086805 | orchestrator | 2026-04-11 02:50:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:29.086904 | orchestrator | 2026-04-11 02:50:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:32.143848 | orchestrator | 2026-04-11 02:50:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:32.146365 | orchestrator | 2026-04-11 02:50:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:32.146453 | orchestrator | 2026-04-11 02:50:32 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 02:50:35.187061 | orchestrator | 2026-04-11 02:50:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:35.187638 | orchestrator | 2026-04-11 02:50:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:35.187690 | orchestrator | 2026-04-11 02:50:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:38.237207 | orchestrator | 2026-04-11 02:50:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:38.239613 | orchestrator | 2026-04-11 02:50:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:38.239660 | orchestrator | 2026-04-11 02:50:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:41.293703 | orchestrator | 2026-04-11 02:50:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:41.295576 | orchestrator | 2026-04-11 02:50:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:41.295670 | orchestrator | 2026-04-11 02:50:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:44.341923 | orchestrator | 2026-04-11 02:50:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:44.343518 | orchestrator | 2026-04-11 02:50:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:44.343611 | orchestrator | 2026-04-11 02:50:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:47.396189 | orchestrator | 2026-04-11 02:50:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:47.397309 | orchestrator | 2026-04-11 02:50:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:47.397355 | orchestrator | 2026-04-11 02:50:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:50.446268 | orchestrator | 2026-04-11 
02:50:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:50.446629 | orchestrator | 2026-04-11 02:50:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:50.446686 | orchestrator | 2026-04-11 02:50:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:53.494985 | orchestrator | 2026-04-11 02:50:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:53.496593 | orchestrator | 2026-04-11 02:50:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:53.496844 | orchestrator | 2026-04-11 02:50:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:56.548379 | orchestrator | 2026-04-11 02:50:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:56.549887 | orchestrator | 2026-04-11 02:50:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:56.549944 | orchestrator | 2026-04-11 02:50:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:50:59.600805 | orchestrator | 2026-04-11 02:50:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:50:59.604950 | orchestrator | 2026-04-11 02:50:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:50:59.605009 | orchestrator | 2026-04-11 02:50:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:51:02.652299 | orchestrator | 2026-04-11 02:51:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:51:02.654195 | orchestrator | 2026-04-11 02:51:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:51:02.654490 | orchestrator | 2026-04-11 02:51:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:51:05.707193 | orchestrator | 2026-04-11 02:51:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:51:05.709662 | orchestrator | 2026-04-11 02:51:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:51:05.709721 | orchestrator | 2026-04-11 02:51:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:51:08.757203 | orchestrator | 2026-04-11 02:51:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:51:08.759269 | orchestrator | 2026-04-11 02:51:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:51:08.759310 | orchestrator | 2026-04-11 02:51:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:51:11.809876 | orchestrator | 2026-04-11 02:51:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:51:11.811348 | orchestrator | 2026-04-11 02:51:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:51:11.811427 | orchestrator | 2026-04-11 02:51:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:51:14.857602 | orchestrator | 2026-04-11 02:51:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:51:14.859572 | orchestrator | 2026-04-11 02:51:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:51:14.859696 | orchestrator | 2026-04-11 02:51:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:51:17.905621 | orchestrator | 2026-04-11 02:51:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:51:17.907217 | orchestrator | 2026-04-11 02:51:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:51:17.907278 | orchestrator | 2026-04-11 02:51:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:51:20.954158 | orchestrator | 2026-04-11 02:51:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:51:20.956104 | orchestrator | 2026-04-11 02:51:20 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:51:20.956273 | orchestrator | 2026-04-11 02:51:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:51:24.009590 | orchestrator | 2026-04-11 02:51:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:51:24.011928 | orchestrator | 2026-04-11 02:51:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:51:24.012110 | orchestrator | 2026-04-11 02:51:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:51:27.063005 | orchestrator | 2026-04-11 02:51:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:51:27.064466 | orchestrator | 2026-04-11 02:51:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:51:27.064557 | orchestrator | 2026-04-11 02:51:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:51:30.110325 | orchestrator | 2026-04-11 02:51:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:51:30.114152 | orchestrator | 2026-04-11 02:51:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:51:30.114295 | orchestrator | 2026-04-11 02:51:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:51:33.168929 | orchestrator | 2026-04-11 02:51:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:51:33.170681 | orchestrator | 2026-04-11 02:51:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:51:33.170757 | orchestrator | 2026-04-11 02:51:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:51:36.219549 | orchestrator | 2026-04-11 02:51:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:53:36.323254 | orchestrator | 2026-04-11 02:53:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
02:53:36.323356 | orchestrator | 2026-04-11 02:53:36 | INFO  | Wait 1 second(s) until the next check
2026-04-11 02:53:39.367095 | orchestrator | 2026-04-11 02:53:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 02:53:39.368198 | orchestrator | 2026-04-11 02:53:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 02:53:39.368262 | orchestrator | 2026-04-11 02:53:39 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds through 02:59:08; tasks e775d4e9-d8d9-4a6e-852d-ec153b94cbfc and 27a6903c-a509-4fb9-bf78-996bf8bed0c6 remain in state STARTED throughout ...]
2026-04-11 02:59:08.352570 | orchestrator | 2026-04-11 02:59:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 02:59:08.354335 | orchestrator | 2026-04-11 02:59:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 02:59:08.354364 | orchestrator | 2026-04-11 02:59:08 | INFO  | Wait 1 second(s)
until the next check 2026-04-11 02:59:11.394081 | orchestrator | 2026-04-11 02:59:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:11.397220 | orchestrator | 2026-04-11 02:59:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:11.397380 | orchestrator | 2026-04-11 02:59:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:14.448050 | orchestrator | 2026-04-11 02:59:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:14.450363 | orchestrator | 2026-04-11 02:59:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:14.450458 | orchestrator | 2026-04-11 02:59:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:17.498135 | orchestrator | 2026-04-11 02:59:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:17.499758 | orchestrator | 2026-04-11 02:59:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:17.499921 | orchestrator | 2026-04-11 02:59:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:20.541653 | orchestrator | 2026-04-11 02:59:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:20.542557 | orchestrator | 2026-04-11 02:59:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:20.542583 | orchestrator | 2026-04-11 02:59:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:23.586523 | orchestrator | 2026-04-11 02:59:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:23.588382 | orchestrator | 2026-04-11 02:59:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:23.588456 | orchestrator | 2026-04-11 02:59:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:26.621974 | orchestrator | 2026-04-11 
02:59:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:26.623481 | orchestrator | 2026-04-11 02:59:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:26.623536 | orchestrator | 2026-04-11 02:59:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:29.661953 | orchestrator | 2026-04-11 02:59:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:29.664773 | orchestrator | 2026-04-11 02:59:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:29.664837 | orchestrator | 2026-04-11 02:59:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:32.702733 | orchestrator | 2026-04-11 02:59:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:32.702972 | orchestrator | 2026-04-11 02:59:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:32.703756 | orchestrator | 2026-04-11 02:59:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:35.747859 | orchestrator | 2026-04-11 02:59:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:35.750265 | orchestrator | 2026-04-11 02:59:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:35.750394 | orchestrator | 2026-04-11 02:59:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:38.802668 | orchestrator | 2026-04-11 02:59:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:38.803820 | orchestrator | 2026-04-11 02:59:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:38.803894 | orchestrator | 2026-04-11 02:59:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:41.850237 | orchestrator | 2026-04-11 02:59:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 02:59:41.854074 | orchestrator | 2026-04-11 02:59:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:41.854166 | orchestrator | 2026-04-11 02:59:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:44.897721 | orchestrator | 2026-04-11 02:59:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:44.900265 | orchestrator | 2026-04-11 02:59:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:44.900369 | orchestrator | 2026-04-11 02:59:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:47.948844 | orchestrator | 2026-04-11 02:59:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:47.949139 | orchestrator | 2026-04-11 02:59:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:47.949176 | orchestrator | 2026-04-11 02:59:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:51.000482 | orchestrator | 2026-04-11 02:59:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:51.001822 | orchestrator | 2026-04-11 02:59:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:51.002198 | orchestrator | 2026-04-11 02:59:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:54.057481 | orchestrator | 2026-04-11 02:59:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:54.058232 | orchestrator | 2026-04-11 02:59:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:54.058337 | orchestrator | 2026-04-11 02:59:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 02:59:57.094232 | orchestrator | 2026-04-11 02:59:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 02:59:57.095141 | orchestrator | 2026-04-11 02:59:57 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 02:59:57.095177 | orchestrator | 2026-04-11 02:59:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:00.135718 | orchestrator | 2026-04-11 03:00:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:00.137668 | orchestrator | 2026-04-11 03:00:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:00.137722 | orchestrator | 2026-04-11 03:00:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:03.191047 | orchestrator | 2026-04-11 03:00:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:03.192129 | orchestrator | 2026-04-11 03:00:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:03.192169 | orchestrator | 2026-04-11 03:00:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:06.238839 | orchestrator | 2026-04-11 03:00:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:06.240936 | orchestrator | 2026-04-11 03:00:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:06.240976 | orchestrator | 2026-04-11 03:00:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:09.286273 | orchestrator | 2026-04-11 03:00:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:09.288255 | orchestrator | 2026-04-11 03:00:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:09.288298 | orchestrator | 2026-04-11 03:00:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:12.334231 | orchestrator | 2026-04-11 03:00:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:12.335668 | orchestrator | 2026-04-11 03:00:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:00:12.335724 | orchestrator | 2026-04-11 03:00:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:15.385650 | orchestrator | 2026-04-11 03:00:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:15.387123 | orchestrator | 2026-04-11 03:00:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:15.387281 | orchestrator | 2026-04-11 03:00:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:18.438848 | orchestrator | 2026-04-11 03:00:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:18.440783 | orchestrator | 2026-04-11 03:00:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:18.440825 | orchestrator | 2026-04-11 03:00:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:21.497268 | orchestrator | 2026-04-11 03:00:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:21.498438 | orchestrator | 2026-04-11 03:00:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:21.498632 | orchestrator | 2026-04-11 03:00:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:24.540796 | orchestrator | 2026-04-11 03:00:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:24.542087 | orchestrator | 2026-04-11 03:00:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:24.542200 | orchestrator | 2026-04-11 03:00:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:27.588025 | orchestrator | 2026-04-11 03:00:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:27.589133 | orchestrator | 2026-04-11 03:00:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:27.589204 | orchestrator | 2026-04-11 03:00:27 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:00:30.624014 | orchestrator | 2026-04-11 03:00:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:30.624681 | orchestrator | 2026-04-11 03:00:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:30.624721 | orchestrator | 2026-04-11 03:00:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:33.659242 | orchestrator | 2026-04-11 03:00:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:33.660413 | orchestrator | 2026-04-11 03:00:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:33.660456 | orchestrator | 2026-04-11 03:00:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:36.707692 | orchestrator | 2026-04-11 03:00:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:36.710995 | orchestrator | 2026-04-11 03:00:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:36.711110 | orchestrator | 2026-04-11 03:00:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:39.768436 | orchestrator | 2026-04-11 03:00:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:39.770736 | orchestrator | 2026-04-11 03:00:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:39.770832 | orchestrator | 2026-04-11 03:00:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:42.823585 | orchestrator | 2026-04-11 03:00:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:42.826708 | orchestrator | 2026-04-11 03:00:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:42.826828 | orchestrator | 2026-04-11 03:00:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:45.874928 | orchestrator | 2026-04-11 
03:00:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:45.876759 | orchestrator | 2026-04-11 03:00:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:45.876818 | orchestrator | 2026-04-11 03:00:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:48.923475 | orchestrator | 2026-04-11 03:00:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:48.925719 | orchestrator | 2026-04-11 03:00:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:48.925822 | orchestrator | 2026-04-11 03:00:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:51.979710 | orchestrator | 2026-04-11 03:00:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:51.981158 | orchestrator | 2026-04-11 03:00:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:51.981235 | orchestrator | 2026-04-11 03:00:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:55.032930 | orchestrator | 2026-04-11 03:00:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:55.034915 | orchestrator | 2026-04-11 03:00:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:55.034964 | orchestrator | 2026-04-11 03:00:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:00:58.076327 | orchestrator | 2026-04-11 03:00:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:00:58.077629 | orchestrator | 2026-04-11 03:00:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:00:58.077684 | orchestrator | 2026-04-11 03:00:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:01.124587 | orchestrator | 2026-04-11 03:01:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:01:01.126209 | orchestrator | 2026-04-11 03:01:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:01.126402 | orchestrator | 2026-04-11 03:01:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:04.177334 | orchestrator | 2026-04-11 03:01:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:04.179695 | orchestrator | 2026-04-11 03:01:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:04.179748 | orchestrator | 2026-04-11 03:01:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:07.229431 | orchestrator | 2026-04-11 03:01:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:07.230609 | orchestrator | 2026-04-11 03:01:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:07.230646 | orchestrator | 2026-04-11 03:01:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:10.275490 | orchestrator | 2026-04-11 03:01:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:10.277896 | orchestrator | 2026-04-11 03:01:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:10.277985 | orchestrator | 2026-04-11 03:01:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:13.333571 | orchestrator | 2026-04-11 03:01:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:13.336345 | orchestrator | 2026-04-11 03:01:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:13.336440 | orchestrator | 2026-04-11 03:01:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:16.378887 | orchestrator | 2026-04-11 03:01:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:16.379964 | orchestrator | 2026-04-11 03:01:16 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:16.380018 | orchestrator | 2026-04-11 03:01:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:19.431182 | orchestrator | 2026-04-11 03:01:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:19.432609 | orchestrator | 2026-04-11 03:01:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:19.432663 | orchestrator | 2026-04-11 03:01:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:22.473680 | orchestrator | 2026-04-11 03:01:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:22.476000 | orchestrator | 2026-04-11 03:01:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:22.476071 | orchestrator | 2026-04-11 03:01:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:25.513221 | orchestrator | 2026-04-11 03:01:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:25.515832 | orchestrator | 2026-04-11 03:01:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:25.515954 | orchestrator | 2026-04-11 03:01:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:28.566853 | orchestrator | 2026-04-11 03:01:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:28.569533 | orchestrator | 2026-04-11 03:01:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:28.569639 | orchestrator | 2026-04-11 03:01:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:31.612324 | orchestrator | 2026-04-11 03:01:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:31.614204 | orchestrator | 2026-04-11 03:01:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:01:31.614300 | orchestrator | 2026-04-11 03:01:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:34.661685 | orchestrator | 2026-04-11 03:01:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:34.663975 | orchestrator | 2026-04-11 03:01:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:34.664027 | orchestrator | 2026-04-11 03:01:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:37.717620 | orchestrator | 2026-04-11 03:01:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:37.718993 | orchestrator | 2026-04-11 03:01:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:37.719038 | orchestrator | 2026-04-11 03:01:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:40.772527 | orchestrator | 2026-04-11 03:01:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:40.775662 | orchestrator | 2026-04-11 03:01:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:40.775726 | orchestrator | 2026-04-11 03:01:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:43.824426 | orchestrator | 2026-04-11 03:01:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:43.826603 | orchestrator | 2026-04-11 03:01:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:43.826666 | orchestrator | 2026-04-11 03:01:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:46.869153 | orchestrator | 2026-04-11 03:01:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:46.870446 | orchestrator | 2026-04-11 03:01:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:46.870822 | orchestrator | 2026-04-11 03:01:46 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:01:49.914194 | orchestrator | 2026-04-11 03:01:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:49.916162 | orchestrator | 2026-04-11 03:01:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:49.916207 | orchestrator | 2026-04-11 03:01:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:52.967602 | orchestrator | 2026-04-11 03:01:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:52.970330 | orchestrator | 2026-04-11 03:01:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:52.970461 | orchestrator | 2026-04-11 03:01:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:56.023782 | orchestrator | 2026-04-11 03:01:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:56.024796 | orchestrator | 2026-04-11 03:01:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:56.024843 | orchestrator | 2026-04-11 03:01:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:01:59.072971 | orchestrator | 2026-04-11 03:01:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:01:59.073791 | orchestrator | 2026-04-11 03:01:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:01:59.073823 | orchestrator | 2026-04-11 03:01:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:02.121535 | orchestrator | 2026-04-11 03:02:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:02.123144 | orchestrator | 2026-04-11 03:02:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:02.123204 | orchestrator | 2026-04-11 03:02:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:05.173437 | orchestrator | 2026-04-11 
03:02:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:05.175587 | orchestrator | 2026-04-11 03:02:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:05.175652 | orchestrator | 2026-04-11 03:02:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:08.230712 | orchestrator | 2026-04-11 03:02:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:08.232540 | orchestrator | 2026-04-11 03:02:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:08.232601 | orchestrator | 2026-04-11 03:02:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:11.280960 | orchestrator | 2026-04-11 03:02:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:11.283564 | orchestrator | 2026-04-11 03:02:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:11.283662 | orchestrator | 2026-04-11 03:02:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:14.325182 | orchestrator | 2026-04-11 03:02:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:14.328039 | orchestrator | 2026-04-11 03:02:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:14.328535 | orchestrator | 2026-04-11 03:02:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:17.377587 | orchestrator | 2026-04-11 03:02:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:17.379340 | orchestrator | 2026-04-11 03:02:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:17.379450 | orchestrator | 2026-04-11 03:02:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:20.422982 | orchestrator | 2026-04-11 03:02:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:02:20.424306 | orchestrator | 2026-04-11 03:02:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:20.425007 | orchestrator | 2026-04-11 03:02:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:23.473833 | orchestrator | 2026-04-11 03:02:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:23.476065 | orchestrator | 2026-04-11 03:02:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:23.476127 | orchestrator | 2026-04-11 03:02:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:26.528198 | orchestrator | 2026-04-11 03:02:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:26.532547 | orchestrator | 2026-04-11 03:02:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:26.532630 | orchestrator | 2026-04-11 03:02:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:29.581506 | orchestrator | 2026-04-11 03:02:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:29.583722 | orchestrator | 2026-04-11 03:02:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:29.583949 | orchestrator | 2026-04-11 03:02:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:32.632499 | orchestrator | 2026-04-11 03:02:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:32.633557 | orchestrator | 2026-04-11 03:02:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:32.633614 | orchestrator | 2026-04-11 03:02:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:35.682007 | orchestrator | 2026-04-11 03:02:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:35.683142 | orchestrator | 2026-04-11 03:02:35 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:35.683241 | orchestrator | 2026-04-11 03:02:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:38.734764 | orchestrator | 2026-04-11 03:02:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:38.736709 | orchestrator | 2026-04-11 03:02:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:38.736789 | orchestrator | 2026-04-11 03:02:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:41.785281 | orchestrator | 2026-04-11 03:02:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:41.787194 | orchestrator | 2026-04-11 03:02:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:41.787251 | orchestrator | 2026-04-11 03:02:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:44.837856 | orchestrator | 2026-04-11 03:02:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:44.838708 | orchestrator | 2026-04-11 03:02:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:44.838758 | orchestrator | 2026-04-11 03:02:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:47.888281 | orchestrator | 2026-04-11 03:02:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:47.889917 | orchestrator | 2026-04-11 03:02:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:02:47.889970 | orchestrator | 2026-04-11 03:02:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:02:50.938075 | orchestrator | 2026-04-11 03:02:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:02:50.940026 | orchestrator | 2026-04-11 03:02:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:02:50.940089 | orchestrator | 2026-04-11 03:02:50 | INFO  | Wait 1 second(s) until the next check
2026-04-11 03:02:53.996383 | orchestrator | 2026-04-11 03:02:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 03:02:53.996625 | orchestrator | 2026-04-11 03:02:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 03:02:53.996664 | orchestrator | 2026-04-11 03:02:53 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 03:02:57 through 03:07:50; tasks e775d4e9-d8d9-4a6e-852d-ec153b94cbfc and 27a6903c-a509-4fb9-bf78-996bf8bed0c6 remained in state STARTED throughout ...]
2026-04-11 03:07:53.081760 | orchestrator | 2026-04-11 03:07:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 03:07:53.083549 | orchestrator | 2026-04-11 03:07:53 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:07:53.083615 | orchestrator | 2026-04-11 03:07:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:07:56.130882 | orchestrator | 2026-04-11 03:07:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:07:56.132197 | orchestrator | 2026-04-11 03:07:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:07:56.132247 | orchestrator | 2026-04-11 03:07:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:07:59.180392 | orchestrator | 2026-04-11 03:07:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:07:59.181260 | orchestrator | 2026-04-11 03:07:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:07:59.181354 | orchestrator | 2026-04-11 03:07:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:02.234298 | orchestrator | 2026-04-11 03:08:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:02.236392 | orchestrator | 2026-04-11 03:08:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:02.236462 | orchestrator | 2026-04-11 03:08:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:05.285908 | orchestrator | 2026-04-11 03:08:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:05.288072 | orchestrator | 2026-04-11 03:08:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:05.288108 | orchestrator | 2026-04-11 03:08:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:08.339260 | orchestrator | 2026-04-11 03:08:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:08.341825 | orchestrator | 2026-04-11 03:08:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:08:08.341868 | orchestrator | 2026-04-11 03:08:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:11.383399 | orchestrator | 2026-04-11 03:08:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:11.384705 | orchestrator | 2026-04-11 03:08:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:11.384758 | orchestrator | 2026-04-11 03:08:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:14.430099 | orchestrator | 2026-04-11 03:08:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:14.430935 | orchestrator | 2026-04-11 03:08:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:14.430968 | orchestrator | 2026-04-11 03:08:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:17.484048 | orchestrator | 2026-04-11 03:08:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:17.486053 | orchestrator | 2026-04-11 03:08:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:17.486110 | orchestrator | 2026-04-11 03:08:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:20.530286 | orchestrator | 2026-04-11 03:08:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:20.531009 | orchestrator | 2026-04-11 03:08:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:20.531037 | orchestrator | 2026-04-11 03:08:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:23.565388 | orchestrator | 2026-04-11 03:08:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:23.567347 | orchestrator | 2026-04-11 03:08:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:23.567421 | orchestrator | 2026-04-11 03:08:23 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:08:26.612934 | orchestrator | 2026-04-11 03:08:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:26.613955 | orchestrator | 2026-04-11 03:08:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:26.614164 | orchestrator | 2026-04-11 03:08:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:29.660924 | orchestrator | 2026-04-11 03:08:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:29.663398 | orchestrator | 2026-04-11 03:08:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:29.663491 | orchestrator | 2026-04-11 03:08:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:32.707133 | orchestrator | 2026-04-11 03:08:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:32.708828 | orchestrator | 2026-04-11 03:08:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:32.708893 | orchestrator | 2026-04-11 03:08:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:35.748089 | orchestrator | 2026-04-11 03:08:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:35.749605 | orchestrator | 2026-04-11 03:08:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:35.749716 | orchestrator | 2026-04-11 03:08:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:38.797183 | orchestrator | 2026-04-11 03:08:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:38.798908 | orchestrator | 2026-04-11 03:08:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:38.798957 | orchestrator | 2026-04-11 03:08:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:41.850460 | orchestrator | 2026-04-11 
03:08:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:41.851575 | orchestrator | 2026-04-11 03:08:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:41.851621 | orchestrator | 2026-04-11 03:08:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:44.896804 | orchestrator | 2026-04-11 03:08:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:44.898533 | orchestrator | 2026-04-11 03:08:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:44.898655 | orchestrator | 2026-04-11 03:08:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:47.944910 | orchestrator | 2026-04-11 03:08:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:47.947223 | orchestrator | 2026-04-11 03:08:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:47.947304 | orchestrator | 2026-04-11 03:08:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:50.993343 | orchestrator | 2026-04-11 03:08:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:50.994368 | orchestrator | 2026-04-11 03:08:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:50.994477 | orchestrator | 2026-04-11 03:08:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:54.042490 | orchestrator | 2026-04-11 03:08:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:08:54.044483 | orchestrator | 2026-04-11 03:08:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:54.044544 | orchestrator | 2026-04-11 03:08:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:08:57.085527 | orchestrator | 2026-04-11 03:08:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:08:57.086219 | orchestrator | 2026-04-11 03:08:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:08:57.086254 | orchestrator | 2026-04-11 03:08:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:00.122960 | orchestrator | 2026-04-11 03:09:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:00.124425 | orchestrator | 2026-04-11 03:09:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:00.124489 | orchestrator | 2026-04-11 03:09:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:03.171165 | orchestrator | 2026-04-11 03:09:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:03.173343 | orchestrator | 2026-04-11 03:09:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:03.173394 | orchestrator | 2026-04-11 03:09:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:06.223294 | orchestrator | 2026-04-11 03:09:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:06.224129 | orchestrator | 2026-04-11 03:09:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:06.224179 | orchestrator | 2026-04-11 03:09:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:09.270764 | orchestrator | 2026-04-11 03:09:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:09.272316 | orchestrator | 2026-04-11 03:09:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:09.272367 | orchestrator | 2026-04-11 03:09:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:12.323073 | orchestrator | 2026-04-11 03:09:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:12.325134 | orchestrator | 2026-04-11 03:09:12 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:12.325194 | orchestrator | 2026-04-11 03:09:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:15.375297 | orchestrator | 2026-04-11 03:09:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:15.376891 | orchestrator | 2026-04-11 03:09:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:15.376952 | orchestrator | 2026-04-11 03:09:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:18.428040 | orchestrator | 2026-04-11 03:09:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:18.429879 | orchestrator | 2026-04-11 03:09:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:18.429942 | orchestrator | 2026-04-11 03:09:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:21.473996 | orchestrator | 2026-04-11 03:09:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:21.475606 | orchestrator | 2026-04-11 03:09:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:21.475661 | orchestrator | 2026-04-11 03:09:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:24.519265 | orchestrator | 2026-04-11 03:09:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:24.521269 | orchestrator | 2026-04-11 03:09:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:24.521338 | orchestrator | 2026-04-11 03:09:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:27.564332 | orchestrator | 2026-04-11 03:09:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:27.566100 | orchestrator | 2026-04-11 03:09:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:09:27.566170 | orchestrator | 2026-04-11 03:09:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:30.607496 | orchestrator | 2026-04-11 03:09:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:30.609238 | orchestrator | 2026-04-11 03:09:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:30.609314 | orchestrator | 2026-04-11 03:09:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:33.643263 | orchestrator | 2026-04-11 03:09:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:33.643754 | orchestrator | 2026-04-11 03:09:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:33.643798 | orchestrator | 2026-04-11 03:09:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:36.688797 | orchestrator | 2026-04-11 03:09:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:36.689620 | orchestrator | 2026-04-11 03:09:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:36.689848 | orchestrator | 2026-04-11 03:09:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:39.732154 | orchestrator | 2026-04-11 03:09:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:39.733480 | orchestrator | 2026-04-11 03:09:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:39.733572 | orchestrator | 2026-04-11 03:09:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:42.780949 | orchestrator | 2026-04-11 03:09:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:42.782306 | orchestrator | 2026-04-11 03:09:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:42.782397 | orchestrator | 2026-04-11 03:09:42 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:09:45.826205 | orchestrator | 2026-04-11 03:09:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:45.828093 | orchestrator | 2026-04-11 03:09:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:45.828251 | orchestrator | 2026-04-11 03:09:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:48.870625 | orchestrator | 2026-04-11 03:09:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:48.871709 | orchestrator | 2026-04-11 03:09:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:48.871831 | orchestrator | 2026-04-11 03:09:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:51.917999 | orchestrator | 2026-04-11 03:09:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:51.921476 | orchestrator | 2026-04-11 03:09:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:51.921548 | orchestrator | 2026-04-11 03:09:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:54.966880 | orchestrator | 2026-04-11 03:09:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:54.967876 | orchestrator | 2026-04-11 03:09:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:54.967958 | orchestrator | 2026-04-11 03:09:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:09:58.017172 | orchestrator | 2026-04-11 03:09:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:09:58.017890 | orchestrator | 2026-04-11 03:09:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:09:58.018008 | orchestrator | 2026-04-11 03:09:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:01.063312 | orchestrator | 2026-04-11 
03:10:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:01.064153 | orchestrator | 2026-04-11 03:10:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:01.064190 | orchestrator | 2026-04-11 03:10:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:04.111999 | orchestrator | 2026-04-11 03:10:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:04.113578 | orchestrator | 2026-04-11 03:10:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:04.113665 | orchestrator | 2026-04-11 03:10:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:07.167025 | orchestrator | 2026-04-11 03:10:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:07.169158 | orchestrator | 2026-04-11 03:10:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:07.169206 | orchestrator | 2026-04-11 03:10:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:10.218840 | orchestrator | 2026-04-11 03:10:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:10.221147 | orchestrator | 2026-04-11 03:10:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:10.221216 | orchestrator | 2026-04-11 03:10:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:13.269044 | orchestrator | 2026-04-11 03:10:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:13.270793 | orchestrator | 2026-04-11 03:10:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:13.270880 | orchestrator | 2026-04-11 03:10:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:16.323447 | orchestrator | 2026-04-11 03:10:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:10:16.325075 | orchestrator | 2026-04-11 03:10:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:16.325336 | orchestrator | 2026-04-11 03:10:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:19.379162 | orchestrator | 2026-04-11 03:10:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:19.381217 | orchestrator | 2026-04-11 03:10:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:19.381298 | orchestrator | 2026-04-11 03:10:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:22.427356 | orchestrator | 2026-04-11 03:10:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:22.429109 | orchestrator | 2026-04-11 03:10:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:22.429157 | orchestrator | 2026-04-11 03:10:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:25.474821 | orchestrator | 2026-04-11 03:10:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:25.477315 | orchestrator | 2026-04-11 03:10:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:25.477379 | orchestrator | 2026-04-11 03:10:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:28.525516 | orchestrator | 2026-04-11 03:10:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:28.527811 | orchestrator | 2026-04-11 03:10:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:28.527911 | orchestrator | 2026-04-11 03:10:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:31.575460 | orchestrator | 2026-04-11 03:10:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:31.578263 | orchestrator | 2026-04-11 03:10:31 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:31.578334 | orchestrator | 2026-04-11 03:10:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:34.633388 | orchestrator | 2026-04-11 03:10:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:34.636609 | orchestrator | 2026-04-11 03:10:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:34.636724 | orchestrator | 2026-04-11 03:10:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:37.683456 | orchestrator | 2026-04-11 03:10:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:37.684569 | orchestrator | 2026-04-11 03:10:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:37.684600 | orchestrator | 2026-04-11 03:10:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:40.737023 | orchestrator | 2026-04-11 03:10:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:40.739268 | orchestrator | 2026-04-11 03:10:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:40.739324 | orchestrator | 2026-04-11 03:10:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:43.795204 | orchestrator | 2026-04-11 03:10:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:43.797983 | orchestrator | 2026-04-11 03:10:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:43.798110 | orchestrator | 2026-04-11 03:10:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:46.850650 | orchestrator | 2026-04-11 03:10:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:46.852831 | orchestrator | 2026-04-11 03:10:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:10:46.852881 | orchestrator | 2026-04-11 03:10:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:49.901223 | orchestrator | 2026-04-11 03:10:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:49.902180 | orchestrator | 2026-04-11 03:10:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:49.902463 | orchestrator | 2026-04-11 03:10:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:52.958573 | orchestrator | 2026-04-11 03:10:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:52.959643 | orchestrator | 2026-04-11 03:10:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:52.959680 | orchestrator | 2026-04-11 03:10:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:56.008705 | orchestrator | 2026-04-11 03:10:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:56.008971 | orchestrator | 2026-04-11 03:10:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:56.009112 | orchestrator | 2026-04-11 03:10:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:10:59.055572 | orchestrator | 2026-04-11 03:10:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:10:59.057319 | orchestrator | 2026-04-11 03:10:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:10:59.057379 | orchestrator | 2026-04-11 03:10:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:11:02.104894 | orchestrator | 2026-04-11 03:11:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:11:02.106975 | orchestrator | 2026-04-11 03:11:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:11:02.107062 | orchestrator | 2026-04-11 03:11:02 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:11:05.161179 | orchestrator | 2026-04-11 03:11:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:11:05.162872 | orchestrator | 2026-04-11 03:11:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:11:05.162911 | orchestrator | 2026-04-11 03:11:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:11:08.217436 | orchestrator | 2026-04-11 03:11:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:11:08.218625 | orchestrator | 2026-04-11 03:11:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:11:08.218707 | orchestrator | 2026-04-11 03:11:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:11:11.266264 | orchestrator | 2026-04-11 03:11:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:11:11.267932 | orchestrator | 2026-04-11 03:11:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:11:11.267983 | orchestrator | 2026-04-11 03:11:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:11:14.316076 | orchestrator | 2026-04-11 03:11:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:11:14.317083 | orchestrator | 2026-04-11 03:11:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:11:14.317138 | orchestrator | 2026-04-11 03:11:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:11:17.367594 | orchestrator | 2026-04-11 03:11:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:11:17.370361 | orchestrator | 2026-04-11 03:11:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:11:17.370421 | orchestrator | 2026-04-11 03:11:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:11:20.407261 | orchestrator | 2026-04-11 
03:11:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:11:20.410089 | orchestrator | 2026-04-11 03:11:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:11:20.410164 | orchestrator | 2026-04-11 03:11:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:11:23.454975 | orchestrator | 2026-04-11 03:11:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:11:23.455545 | orchestrator | 2026-04-11 03:11:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:11:23.455570 | orchestrator | 2026-04-11 03:11:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:11:26.507278 | orchestrator | 2026-04-11 03:11:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:11:26.510127 | orchestrator | 2026-04-11 03:11:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:11:26.510201 | orchestrator | 2026-04-11 03:11:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:11:29.556858 | orchestrator | 2026-04-11 03:11:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:11:29.558263 | orchestrator | 2026-04-11 03:11:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:11:29.558342 | orchestrator | 2026-04-11 03:11:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:11:32.593844 | orchestrator | 2026-04-11 03:11:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:11:32.594727 | orchestrator | 2026-04-11 03:11:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:11:32.594830 | orchestrator | 2026-04-11 03:11:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:11:35.644902 | orchestrator | 2026-04-11 03:11:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:11:35.646900 | orchestrator | 2026-04-11 03:11:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 03:11:35.646949 | orchestrator | 2026-04-11 03:11:35 | INFO  | Wait 1 second(s) until the next check
2026-04-11 03:11:38.704043 | orchestrator | 2026-04-11 03:11:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 03:11:38.708351 | orchestrator | 2026-04-11 03:11:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 03:11:38.708470 | orchestrator | 2026-04-11 03:11:38 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: from 03:11:41 to 03:17:05, checked roughly every 3 seconds, tasks e775d4e9-d8d9-4a6e-852d-ec153b94cbfc and 27a6903c-a509-4fb9-bf78-996bf8bed0c6 remained in state STARTED ...]
2026-04-11 03:17:08.321493 | orchestrator | 2026-04-11 03:17:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 03:17:08.322160 | orchestrator | 2026-04-11 03:17:08 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:08.322216 | orchestrator | 2026-04-11 03:17:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:11.378706 | orchestrator | 2026-04-11 03:17:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:11.380275 | orchestrator | 2026-04-11 03:17:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:11.380327 | orchestrator | 2026-04-11 03:17:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:14.426244 | orchestrator | 2026-04-11 03:17:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:14.427080 | orchestrator | 2026-04-11 03:17:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:14.427127 | orchestrator | 2026-04-11 03:17:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:17.492520 | orchestrator | 2026-04-11 03:17:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:17.494467 | orchestrator | 2026-04-11 03:17:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:17.494620 | orchestrator | 2026-04-11 03:17:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:20.543644 | orchestrator | 2026-04-11 03:17:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:20.545204 | orchestrator | 2026-04-11 03:17:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:20.545296 | orchestrator | 2026-04-11 03:17:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:23.591607 | orchestrator | 2026-04-11 03:17:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:23.591784 | orchestrator | 2026-04-11 03:17:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:17:23.591931 | orchestrator | 2026-04-11 03:17:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:26.641227 | orchestrator | 2026-04-11 03:17:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:26.642559 | orchestrator | 2026-04-11 03:17:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:26.642631 | orchestrator | 2026-04-11 03:17:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:29.680592 | orchestrator | 2026-04-11 03:17:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:29.681606 | orchestrator | 2026-04-11 03:17:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:29.681762 | orchestrator | 2026-04-11 03:17:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:32.735305 | orchestrator | 2026-04-11 03:17:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:32.736968 | orchestrator | 2026-04-11 03:17:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:32.737012 | orchestrator | 2026-04-11 03:17:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:35.779775 | orchestrator | 2026-04-11 03:17:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:35.781202 | orchestrator | 2026-04-11 03:17:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:35.781348 | orchestrator | 2026-04-11 03:17:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:38.827509 | orchestrator | 2026-04-11 03:17:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:38.830344 | orchestrator | 2026-04-11 03:17:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:38.830478 | orchestrator | 2026-04-11 03:17:38 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:17:41.873344 | orchestrator | 2026-04-11 03:17:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:41.873609 | orchestrator | 2026-04-11 03:17:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:41.873638 | orchestrator | 2026-04-11 03:17:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:44.915189 | orchestrator | 2026-04-11 03:17:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:44.919003 | orchestrator | 2026-04-11 03:17:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:44.919058 | orchestrator | 2026-04-11 03:17:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:47.970260 | orchestrator | 2026-04-11 03:17:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:47.971543 | orchestrator | 2026-04-11 03:17:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:47.971607 | orchestrator | 2026-04-11 03:17:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:51.021391 | orchestrator | 2026-04-11 03:17:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:51.021549 | orchestrator | 2026-04-11 03:17:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:51.021565 | orchestrator | 2026-04-11 03:17:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:54.068673 | orchestrator | 2026-04-11 03:17:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:54.070594 | orchestrator | 2026-04-11 03:17:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:54.070683 | orchestrator | 2026-04-11 03:17:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:17:57.113689 | orchestrator | 2026-04-11 
03:17:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:17:57.115543 | orchestrator | 2026-04-11 03:17:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:17:57.115680 | orchestrator | 2026-04-11 03:17:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:00.169286 | orchestrator | 2026-04-11 03:18:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:00.170675 | orchestrator | 2026-04-11 03:18:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:00.170733 | orchestrator | 2026-04-11 03:18:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:03.218913 | orchestrator | 2026-04-11 03:18:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:03.221186 | orchestrator | 2026-04-11 03:18:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:03.221237 | orchestrator | 2026-04-11 03:18:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:06.262644 | orchestrator | 2026-04-11 03:18:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:06.264182 | orchestrator | 2026-04-11 03:18:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:06.264258 | orchestrator | 2026-04-11 03:18:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:09.309417 | orchestrator | 2026-04-11 03:18:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:09.312448 | orchestrator | 2026-04-11 03:18:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:09.312512 | orchestrator | 2026-04-11 03:18:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:12.355626 | orchestrator | 2026-04-11 03:18:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:18:12.355770 | orchestrator | 2026-04-11 03:18:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:12.355783 | orchestrator | 2026-04-11 03:18:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:15.406089 | orchestrator | 2026-04-11 03:18:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:15.406985 | orchestrator | 2026-04-11 03:18:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:15.407038 | orchestrator | 2026-04-11 03:18:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:18.458448 | orchestrator | 2026-04-11 03:18:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:18.462523 | orchestrator | 2026-04-11 03:18:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:18.463135 | orchestrator | 2026-04-11 03:18:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:21.507688 | orchestrator | 2026-04-11 03:18:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:21.509213 | orchestrator | 2026-04-11 03:18:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:21.509265 | orchestrator | 2026-04-11 03:18:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:24.560815 | orchestrator | 2026-04-11 03:18:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:24.562713 | orchestrator | 2026-04-11 03:18:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:24.562778 | orchestrator | 2026-04-11 03:18:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:27.612245 | orchestrator | 2026-04-11 03:18:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:27.615426 | orchestrator | 2026-04-11 03:18:27 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:27.615899 | orchestrator | 2026-04-11 03:18:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:30.663379 | orchestrator | 2026-04-11 03:18:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:30.665284 | orchestrator | 2026-04-11 03:18:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:30.665361 | orchestrator | 2026-04-11 03:18:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:33.707700 | orchestrator | 2026-04-11 03:18:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:33.708364 | orchestrator | 2026-04-11 03:18:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:33.708790 | orchestrator | 2026-04-11 03:18:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:36.753370 | orchestrator | 2026-04-11 03:18:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:36.755352 | orchestrator | 2026-04-11 03:18:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:36.755411 | orchestrator | 2026-04-11 03:18:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:39.802705 | orchestrator | 2026-04-11 03:18:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:39.803059 | orchestrator | 2026-04-11 03:18:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:39.803091 | orchestrator | 2026-04-11 03:18:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:42.842711 | orchestrator | 2026-04-11 03:18:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:42.843946 | orchestrator | 2026-04-11 03:18:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:18:42.844023 | orchestrator | 2026-04-11 03:18:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:45.892230 | orchestrator | 2026-04-11 03:18:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:45.895178 | orchestrator | 2026-04-11 03:18:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:45.895269 | orchestrator | 2026-04-11 03:18:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:48.948439 | orchestrator | 2026-04-11 03:18:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:48.950344 | orchestrator | 2026-04-11 03:18:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:48.950439 | orchestrator | 2026-04-11 03:18:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:51.989454 | orchestrator | 2026-04-11 03:18:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:51.991962 | orchestrator | 2026-04-11 03:18:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:51.992035 | orchestrator | 2026-04-11 03:18:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:55.039484 | orchestrator | 2026-04-11 03:18:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:55.040365 | orchestrator | 2026-04-11 03:18:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:55.040745 | orchestrator | 2026-04-11 03:18:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:18:58.084527 | orchestrator | 2026-04-11 03:18:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:18:58.085478 | orchestrator | 2026-04-11 03:18:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:18:58.085514 | orchestrator | 2026-04-11 03:18:58 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:19:01.132876 | orchestrator | 2026-04-11 03:19:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:01.135340 | orchestrator | 2026-04-11 03:19:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:01.135409 | orchestrator | 2026-04-11 03:19:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:04.184352 | orchestrator | 2026-04-11 03:19:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:04.186291 | orchestrator | 2026-04-11 03:19:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:04.186375 | orchestrator | 2026-04-11 03:19:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:07.223619 | orchestrator | 2026-04-11 03:19:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:07.224398 | orchestrator | 2026-04-11 03:19:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:07.224554 | orchestrator | 2026-04-11 03:19:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:10.275450 | orchestrator | 2026-04-11 03:19:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:10.277045 | orchestrator | 2026-04-11 03:19:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:10.277107 | orchestrator | 2026-04-11 03:19:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:13.328870 | orchestrator | 2026-04-11 03:19:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:13.331538 | orchestrator | 2026-04-11 03:19:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:13.331590 | orchestrator | 2026-04-11 03:19:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:16.380199 | orchestrator | 2026-04-11 
03:19:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:16.380582 | orchestrator | 2026-04-11 03:19:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:16.380626 | orchestrator | 2026-04-11 03:19:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:19.440882 | orchestrator | 2026-04-11 03:19:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:19.443032 | orchestrator | 2026-04-11 03:19:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:19.443060 | orchestrator | 2026-04-11 03:19:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:22.491968 | orchestrator | 2026-04-11 03:19:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:22.495212 | orchestrator | 2026-04-11 03:19:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:22.495252 | orchestrator | 2026-04-11 03:19:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:25.548953 | orchestrator | 2026-04-11 03:19:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:25.549513 | orchestrator | 2026-04-11 03:19:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:25.549550 | orchestrator | 2026-04-11 03:19:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:28.602985 | orchestrator | 2026-04-11 03:19:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:28.603726 | orchestrator | 2026-04-11 03:19:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:28.603853 | orchestrator | 2026-04-11 03:19:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:31.655002 | orchestrator | 2026-04-11 03:19:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:19:31.656053 | orchestrator | 2026-04-11 03:19:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:31.656091 | orchestrator | 2026-04-11 03:19:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:34.707905 | orchestrator | 2026-04-11 03:19:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:34.709097 | orchestrator | 2026-04-11 03:19:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:34.709147 | orchestrator | 2026-04-11 03:19:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:37.753214 | orchestrator | 2026-04-11 03:19:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:37.754680 | orchestrator | 2026-04-11 03:19:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:37.754715 | orchestrator | 2026-04-11 03:19:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:40.805280 | orchestrator | 2026-04-11 03:19:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:40.806952 | orchestrator | 2026-04-11 03:19:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:40.807007 | orchestrator | 2026-04-11 03:19:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:43.853012 | orchestrator | 2026-04-11 03:19:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:43.854257 | orchestrator | 2026-04-11 03:19:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:43.854555 | orchestrator | 2026-04-11 03:19:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:46.904094 | orchestrator | 2026-04-11 03:19:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:46.904596 | orchestrator | 2026-04-11 03:19:46 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:46.904624 | orchestrator | 2026-04-11 03:19:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:49.955272 | orchestrator | 2026-04-11 03:19:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:49.958189 | orchestrator | 2026-04-11 03:19:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:49.958268 | orchestrator | 2026-04-11 03:19:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:53.005104 | orchestrator | 2026-04-11 03:19:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:53.006871 | orchestrator | 2026-04-11 03:19:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:53.006918 | orchestrator | 2026-04-11 03:19:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:56.053824 | orchestrator | 2026-04-11 03:19:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:56.055934 | orchestrator | 2026-04-11 03:19:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:56.056004 | orchestrator | 2026-04-11 03:19:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:19:59.095203 | orchestrator | 2026-04-11 03:19:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:19:59.097254 | orchestrator | 2026-04-11 03:19:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:19:59.097896 | orchestrator | 2026-04-11 03:19:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:02.141865 | orchestrator | 2026-04-11 03:20:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:02.144455 | orchestrator | 2026-04-11 03:20:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:20:02.144531 | orchestrator | 2026-04-11 03:20:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:05.200045 | orchestrator | 2026-04-11 03:20:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:05.201849 | orchestrator | 2026-04-11 03:20:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:05.201903 | orchestrator | 2026-04-11 03:20:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:08.252426 | orchestrator | 2026-04-11 03:20:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:08.253629 | orchestrator | 2026-04-11 03:20:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:08.253672 | orchestrator | 2026-04-11 03:20:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:11.311103 | orchestrator | 2026-04-11 03:20:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:11.313911 | orchestrator | 2026-04-11 03:20:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:11.314014 | orchestrator | 2026-04-11 03:20:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:14.365883 | orchestrator | 2026-04-11 03:20:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:14.367620 | orchestrator | 2026-04-11 03:20:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:14.367981 | orchestrator | 2026-04-11 03:20:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:17.416480 | orchestrator | 2026-04-11 03:20:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:17.416632 | orchestrator | 2026-04-11 03:20:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:17.416650 | orchestrator | 2026-04-11 03:20:17 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:20:20.459104 | orchestrator | 2026-04-11 03:20:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:20.459514 | orchestrator | 2026-04-11 03:20:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:20.459551 | orchestrator | 2026-04-11 03:20:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:23.508650 | orchestrator | 2026-04-11 03:20:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:23.510984 | orchestrator | 2026-04-11 03:20:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:23.511066 | orchestrator | 2026-04-11 03:20:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:26.563920 | orchestrator | 2026-04-11 03:20:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:26.565115 | orchestrator | 2026-04-11 03:20:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:26.565194 | orchestrator | 2026-04-11 03:20:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:29.613876 | orchestrator | 2026-04-11 03:20:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:29.614785 | orchestrator | 2026-04-11 03:20:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:29.614822 | orchestrator | 2026-04-11 03:20:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:32.657581 | orchestrator | 2026-04-11 03:20:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:32.659141 | orchestrator | 2026-04-11 03:20:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:32.659204 | orchestrator | 2026-04-11 03:20:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:35.709475 | orchestrator | 2026-04-11 
03:20:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:35.710942 | orchestrator | 2026-04-11 03:20:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:35.711004 | orchestrator | 2026-04-11 03:20:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:38.762462 | orchestrator | 2026-04-11 03:20:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:38.764232 | orchestrator | 2026-04-11 03:20:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:38.764303 | orchestrator | 2026-04-11 03:20:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:41.810520 | orchestrator | 2026-04-11 03:20:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:41.813052 | orchestrator | 2026-04-11 03:20:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:41.813090 | orchestrator | 2026-04-11 03:20:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:44.857545 | orchestrator | 2026-04-11 03:20:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:44.860598 | orchestrator | 2026-04-11 03:20:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:44.860712 | orchestrator | 2026-04-11 03:20:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:47.907871 | orchestrator | 2026-04-11 03:20:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:20:47.909386 | orchestrator | 2026-04-11 03:20:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:20:47.909440 | orchestrator | 2026-04-11 03:20:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:20:50.953562 | orchestrator | 2026-04-11 03:20:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:20:50.954010 | orchestrator | 2026-04-11 03:20:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 03:20:50.954103 | orchestrator | 2026-04-11 03:20:50 | INFO  | Wait 1 second(s) until the next check
2026-04-11 03:20:54.005009 | orchestrator | 2026-04-11 03:20:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 03:20:54.006473 | orchestrator | 2026-04-11 03:20:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 03:20:54.006601 | orchestrator | 2026-04-11 03:20:54 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 03:20:57 to 03:28:05, with a gap in console output between 03:22:37 and 03:24:40; tasks e775d4e9-d8d9-4a6e-852d-ec153b94cbfc and 27a6903c-a509-4fb9-bf78-996bf8bed0c6 remained in state STARTED throughout ...]
2026-04-11 03:28:08.133170 | orchestrator | 2026-04-11 03:28:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state
STARTED 2026-04-11 03:28:08.135556 | orchestrator | 2026-04-11 03:28:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:08.135617 | orchestrator | 2026-04-11 03:28:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:11.185769 | orchestrator | 2026-04-11 03:28:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:11.188119 | orchestrator | 2026-04-11 03:28:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:11.188821 | orchestrator | 2026-04-11 03:28:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:14.229180 | orchestrator | 2026-04-11 03:28:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:14.230560 | orchestrator | 2026-04-11 03:28:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:14.230624 | orchestrator | 2026-04-11 03:28:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:17.276787 | orchestrator | 2026-04-11 03:28:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:17.280735 | orchestrator | 2026-04-11 03:28:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:17.280853 | orchestrator | 2026-04-11 03:28:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:20.329185 | orchestrator | 2026-04-11 03:28:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:20.332357 | orchestrator | 2026-04-11 03:28:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:20.332422 | orchestrator | 2026-04-11 03:28:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:23.373572 | orchestrator | 2026-04-11 03:28:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:23.374868 | orchestrator | 2026-04-11 03:28:23 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:23.374919 | orchestrator | 2026-04-11 03:28:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:26.423747 | orchestrator | 2026-04-11 03:28:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:26.425877 | orchestrator | 2026-04-11 03:28:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:26.425938 | orchestrator | 2026-04-11 03:28:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:29.473476 | orchestrator | 2026-04-11 03:28:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:29.475286 | orchestrator | 2026-04-11 03:28:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:29.475370 | orchestrator | 2026-04-11 03:28:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:32.516501 | orchestrator | 2026-04-11 03:28:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:32.520148 | orchestrator | 2026-04-11 03:28:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:32.520203 | orchestrator | 2026-04-11 03:28:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:35.571334 | orchestrator | 2026-04-11 03:28:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:35.572679 | orchestrator | 2026-04-11 03:28:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:35.572756 | orchestrator | 2026-04-11 03:28:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:38.623867 | orchestrator | 2026-04-11 03:28:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:38.625360 | orchestrator | 2026-04-11 03:28:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:28:38.625429 | orchestrator | 2026-04-11 03:28:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:41.671328 | orchestrator | 2026-04-11 03:28:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:41.673860 | orchestrator | 2026-04-11 03:28:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:41.673908 | orchestrator | 2026-04-11 03:28:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:44.724352 | orchestrator | 2026-04-11 03:28:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:44.725977 | orchestrator | 2026-04-11 03:28:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:44.726206 | orchestrator | 2026-04-11 03:28:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:47.776127 | orchestrator | 2026-04-11 03:28:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:47.778347 | orchestrator | 2026-04-11 03:28:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:47.778404 | orchestrator | 2026-04-11 03:28:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:50.824840 | orchestrator | 2026-04-11 03:28:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:50.827343 | orchestrator | 2026-04-11 03:28:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:50.827390 | orchestrator | 2026-04-11 03:28:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:53.870415 | orchestrator | 2026-04-11 03:28:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:53.871584 | orchestrator | 2026-04-11 03:28:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:53.871682 | orchestrator | 2026-04-11 03:28:53 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:28:56.917296 | orchestrator | 2026-04-11 03:28:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:56.919522 | orchestrator | 2026-04-11 03:28:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:56.919612 | orchestrator | 2026-04-11 03:28:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:28:59.963414 | orchestrator | 2026-04-11 03:28:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:28:59.965950 | orchestrator | 2026-04-11 03:28:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:28:59.966152 | orchestrator | 2026-04-11 03:28:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:03.006495 | orchestrator | 2026-04-11 03:29:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:03.008197 | orchestrator | 2026-04-11 03:29:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:03.008286 | orchestrator | 2026-04-11 03:29:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:06.049382 | orchestrator | 2026-04-11 03:29:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:06.050353 | orchestrator | 2026-04-11 03:29:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:06.050440 | orchestrator | 2026-04-11 03:29:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:09.093429 | orchestrator | 2026-04-11 03:29:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:09.094879 | orchestrator | 2026-04-11 03:29:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:09.094926 | orchestrator | 2026-04-11 03:29:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:12.139200 | orchestrator | 2026-04-11 
03:29:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:12.140286 | orchestrator | 2026-04-11 03:29:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:12.140319 | orchestrator | 2026-04-11 03:29:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:15.182314 | orchestrator | 2026-04-11 03:29:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:15.185188 | orchestrator | 2026-04-11 03:29:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:15.185275 | orchestrator | 2026-04-11 03:29:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:18.230655 | orchestrator | 2026-04-11 03:29:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:18.231964 | orchestrator | 2026-04-11 03:29:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:18.232151 | orchestrator | 2026-04-11 03:29:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:21.278411 | orchestrator | 2026-04-11 03:29:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:21.280715 | orchestrator | 2026-04-11 03:29:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:21.280894 | orchestrator | 2026-04-11 03:29:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:24.320871 | orchestrator | 2026-04-11 03:29:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:24.322595 | orchestrator | 2026-04-11 03:29:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:24.322652 | orchestrator | 2026-04-11 03:29:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:27.366346 | orchestrator | 2026-04-11 03:29:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:29:27.366704 | orchestrator | 2026-04-11 03:29:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:27.366731 | orchestrator | 2026-04-11 03:29:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:30.405298 | orchestrator | 2026-04-11 03:29:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:30.407126 | orchestrator | 2026-04-11 03:29:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:30.407183 | orchestrator | 2026-04-11 03:29:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:33.448736 | orchestrator | 2026-04-11 03:29:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:33.449327 | orchestrator | 2026-04-11 03:29:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:33.449430 | orchestrator | 2026-04-11 03:29:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:36.499750 | orchestrator | 2026-04-11 03:29:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:36.501798 | orchestrator | 2026-04-11 03:29:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:36.501883 | orchestrator | 2026-04-11 03:29:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:39.546477 | orchestrator | 2026-04-11 03:29:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:39.547697 | orchestrator | 2026-04-11 03:29:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:39.547774 | orchestrator | 2026-04-11 03:29:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:42.594322 | orchestrator | 2026-04-11 03:29:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:42.594516 | orchestrator | 2026-04-11 03:29:42 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:42.594538 | orchestrator | 2026-04-11 03:29:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:45.654654 | orchestrator | 2026-04-11 03:29:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:45.657196 | orchestrator | 2026-04-11 03:29:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:45.657288 | orchestrator | 2026-04-11 03:29:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:48.710188 | orchestrator | 2026-04-11 03:29:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:48.711243 | orchestrator | 2026-04-11 03:29:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:48.711288 | orchestrator | 2026-04-11 03:29:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:51.755095 | orchestrator | 2026-04-11 03:29:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:51.758089 | orchestrator | 2026-04-11 03:29:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:51.758182 | orchestrator | 2026-04-11 03:29:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:54.807445 | orchestrator | 2026-04-11 03:29:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:54.812663 | orchestrator | 2026-04-11 03:29:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:29:54.812740 | orchestrator | 2026-04-11 03:29:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:29:57.861747 | orchestrator | 2026-04-11 03:29:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:29:57.863723 | orchestrator | 2026-04-11 03:29:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:29:57.863807 | orchestrator | 2026-04-11 03:29:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:00.905741 | orchestrator | 2026-04-11 03:30:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:00.907331 | orchestrator | 2026-04-11 03:30:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:00.907556 | orchestrator | 2026-04-11 03:30:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:03.952709 | orchestrator | 2026-04-11 03:30:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:03.953892 | orchestrator | 2026-04-11 03:30:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:03.953930 | orchestrator | 2026-04-11 03:30:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:07.003024 | orchestrator | 2026-04-11 03:30:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:07.005909 | orchestrator | 2026-04-11 03:30:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:07.005992 | orchestrator | 2026-04-11 03:30:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:10.047190 | orchestrator | 2026-04-11 03:30:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:10.049388 | orchestrator | 2026-04-11 03:30:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:10.049456 | orchestrator | 2026-04-11 03:30:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:13.096340 | orchestrator | 2026-04-11 03:30:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:13.097690 | orchestrator | 2026-04-11 03:30:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:13.097749 | orchestrator | 2026-04-11 03:30:13 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:30:16.149440 | orchestrator | 2026-04-11 03:30:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:16.151179 | orchestrator | 2026-04-11 03:30:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:16.151238 | orchestrator | 2026-04-11 03:30:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:19.197615 | orchestrator | 2026-04-11 03:30:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:19.199959 | orchestrator | 2026-04-11 03:30:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:19.201041 | orchestrator | 2026-04-11 03:30:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:22.251870 | orchestrator | 2026-04-11 03:30:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:22.253855 | orchestrator | 2026-04-11 03:30:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:22.253921 | orchestrator | 2026-04-11 03:30:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:25.299853 | orchestrator | 2026-04-11 03:30:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:25.301165 | orchestrator | 2026-04-11 03:30:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:25.301321 | orchestrator | 2026-04-11 03:30:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:28.344977 | orchestrator | 2026-04-11 03:30:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:28.346943 | orchestrator | 2026-04-11 03:30:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:28.346998 | orchestrator | 2026-04-11 03:30:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:31.390453 | orchestrator | 2026-04-11 
03:30:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:31.392415 | orchestrator | 2026-04-11 03:30:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:31.392561 | orchestrator | 2026-04-11 03:30:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:34.434569 | orchestrator | 2026-04-11 03:30:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:34.436534 | orchestrator | 2026-04-11 03:30:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:34.436647 | orchestrator | 2026-04-11 03:30:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:37.484170 | orchestrator | 2026-04-11 03:30:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:37.486367 | orchestrator | 2026-04-11 03:30:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:37.486405 | orchestrator | 2026-04-11 03:30:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:40.530563 | orchestrator | 2026-04-11 03:30:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:40.530751 | orchestrator | 2026-04-11 03:30:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:40.530778 | orchestrator | 2026-04-11 03:30:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:43.569277 | orchestrator | 2026-04-11 03:30:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:43.573091 | orchestrator | 2026-04-11 03:30:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:43.573163 | orchestrator | 2026-04-11 03:30:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:46.619824 | orchestrator | 2026-04-11 03:30:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:30:46.621711 | orchestrator | 2026-04-11 03:30:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:46.621816 | orchestrator | 2026-04-11 03:30:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:49.668587 | orchestrator | 2026-04-11 03:30:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:49.670529 | orchestrator | 2026-04-11 03:30:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:49.670705 | orchestrator | 2026-04-11 03:30:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:52.722769 | orchestrator | 2026-04-11 03:30:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:52.724052 | orchestrator | 2026-04-11 03:30:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:52.724485 | orchestrator | 2026-04-11 03:30:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:55.769491 | orchestrator | 2026-04-11 03:30:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:55.771006 | orchestrator | 2026-04-11 03:30:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:55.771087 | orchestrator | 2026-04-11 03:30:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:30:58.812263 | orchestrator | 2026-04-11 03:30:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:30:58.815676 | orchestrator | 2026-04-11 03:30:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:30:58.815765 | orchestrator | 2026-04-11 03:30:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:31:01.865037 | orchestrator | 2026-04-11 03:31:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:31:01.867486 | orchestrator | 2026-04-11 03:31:01 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:31:01.867899 | orchestrator | 2026-04-11 03:31:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:31:04.915096 | orchestrator | 2026-04-11 03:31:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:31:04.915975 | orchestrator | 2026-04-11 03:31:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:31:04.916332 | orchestrator | 2026-04-11 03:31:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:31:07.959667 | orchestrator | 2026-04-11 03:31:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:31:07.962330 | orchestrator | 2026-04-11 03:31:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:31:07.962396 | orchestrator | 2026-04-11 03:31:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:31:11.014304 | orchestrator | 2026-04-11 03:31:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:31:11.015975 | orchestrator | 2026-04-11 03:31:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:31:11.016039 | orchestrator | 2026-04-11 03:31:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:31:14.067483 | orchestrator | 2026-04-11 03:31:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:31:14.070244 | orchestrator | 2026-04-11 03:31:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:31:14.070313 | orchestrator | 2026-04-11 03:31:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:31:17.117861 | orchestrator | 2026-04-11 03:31:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:31:17.120788 | orchestrator | 2026-04-11 03:31:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:31:17.120907 | orchestrator | 2026-04-11 03:31:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:31:20.162371 | orchestrator | 2026-04-11 03:31:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:31:20.164363 | orchestrator | 2026-04-11 03:31:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:31:20.164411 | orchestrator | 2026-04-11 03:31:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:31:23.211413 | orchestrator | 2026-04-11 03:31:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:31:23.214422 | orchestrator | 2026-04-11 03:31:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:31:23.214481 | orchestrator | 2026-04-11 03:31:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:31:26.263488 | orchestrator | 2026-04-11 03:31:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:31:26.266228 | orchestrator | 2026-04-11 03:31:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:31:26.266285 | orchestrator | 2026-04-11 03:31:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:31:29.314592 | orchestrator | 2026-04-11 03:31:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:31:29.315959 | orchestrator | 2026-04-11 03:31:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:31:29.316008 | orchestrator | 2026-04-11 03:31:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:31:32.362001 | orchestrator | 2026-04-11 03:31:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:31:32.363956 | orchestrator | 2026-04-11 03:31:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:31:32.364075 | orchestrator | 2026-04-11 03:31:32 | INFO  | Wait 1 second(s) 
until the next check
2026-04-11 03:31:35.414950 | orchestrator | 2026-04-11 03:31:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 03:31:35.416471 | orchestrator | 2026-04-11 03:31:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 03:31:35.416514 | orchestrator | 2026-04-11 03:31:35 | INFO  | Wait 1 second(s) until the next check
[... identical polling output for the same two tasks, repeated every ~3 seconds from 03:31:38 to 03:36:46, elided ...]
2026-04-11 03:36:49.654863 | orchestrator | 2026-04-11 03:36:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 03:36:49.659879 | orchestrator | 2026-04-11 03:36:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 03:36:49.659958 | orchestrator | 2026-04-11 03:36:49 | INFO  | Wait 1 second(s)
until the next check 2026-04-11 03:36:52.714327 | orchestrator | 2026-04-11 03:36:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:36:52.716843 | orchestrator | 2026-04-11 03:36:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:36:52.716903 | orchestrator | 2026-04-11 03:36:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:36:55.769867 | orchestrator | 2026-04-11 03:36:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:36:55.771436 | orchestrator | 2026-04-11 03:36:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:36:55.771516 | orchestrator | 2026-04-11 03:36:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:36:58.822664 | orchestrator | 2026-04-11 03:36:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:36:58.824217 | orchestrator | 2026-04-11 03:36:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:36:58.824266 | orchestrator | 2026-04-11 03:36:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:01.874080 | orchestrator | 2026-04-11 03:37:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:01.876013 | orchestrator | 2026-04-11 03:37:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:01.876073 | orchestrator | 2026-04-11 03:37:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:04.921820 | orchestrator | 2026-04-11 03:37:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:04.922951 | orchestrator | 2026-04-11 03:37:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:04.923141 | orchestrator | 2026-04-11 03:37:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:07.976367 | orchestrator | 2026-04-11 
03:37:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:07.977693 | orchestrator | 2026-04-11 03:37:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:07.977919 | orchestrator | 2026-04-11 03:37:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:11.028028 | orchestrator | 2026-04-11 03:37:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:11.029063 | orchestrator | 2026-04-11 03:37:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:11.029213 | orchestrator | 2026-04-11 03:37:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:14.080823 | orchestrator | 2026-04-11 03:37:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:14.080959 | orchestrator | 2026-04-11 03:37:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:14.080974 | orchestrator | 2026-04-11 03:37:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:17.130854 | orchestrator | 2026-04-11 03:37:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:17.132013 | orchestrator | 2026-04-11 03:37:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:17.132096 | orchestrator | 2026-04-11 03:37:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:20.174170 | orchestrator | 2026-04-11 03:37:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:20.176504 | orchestrator | 2026-04-11 03:37:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:20.176565 | orchestrator | 2026-04-11 03:37:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:23.227292 | orchestrator | 2026-04-11 03:37:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:37:23.227582 | orchestrator | 2026-04-11 03:37:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:23.227811 | orchestrator | 2026-04-11 03:37:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:26.281024 | orchestrator | 2026-04-11 03:37:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:26.282667 | orchestrator | 2026-04-11 03:37:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:26.282708 | orchestrator | 2026-04-11 03:37:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:29.331276 | orchestrator | 2026-04-11 03:37:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:29.333613 | orchestrator | 2026-04-11 03:37:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:29.333674 | orchestrator | 2026-04-11 03:37:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:32.379111 | orchestrator | 2026-04-11 03:37:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:32.381697 | orchestrator | 2026-04-11 03:37:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:32.381782 | orchestrator | 2026-04-11 03:37:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:35.429972 | orchestrator | 2026-04-11 03:37:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:35.432002 | orchestrator | 2026-04-11 03:37:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:35.432237 | orchestrator | 2026-04-11 03:37:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:38.482594 | orchestrator | 2026-04-11 03:37:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:38.484540 | orchestrator | 2026-04-11 03:37:38 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:38.484628 | orchestrator | 2026-04-11 03:37:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:41.539400 | orchestrator | 2026-04-11 03:37:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:41.540683 | orchestrator | 2026-04-11 03:37:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:41.540771 | orchestrator | 2026-04-11 03:37:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:44.592391 | orchestrator | 2026-04-11 03:37:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:44.593736 | orchestrator | 2026-04-11 03:37:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:44.593858 | orchestrator | 2026-04-11 03:37:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:47.642984 | orchestrator | 2026-04-11 03:37:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:47.645088 | orchestrator | 2026-04-11 03:37:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:47.645214 | orchestrator | 2026-04-11 03:37:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:50.694795 | orchestrator | 2026-04-11 03:37:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:50.696992 | orchestrator | 2026-04-11 03:37:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:50.697391 | orchestrator | 2026-04-11 03:37:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:53.745810 | orchestrator | 2026-04-11 03:37:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:53.747728 | orchestrator | 2026-04-11 03:37:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:37:53.747787 | orchestrator | 2026-04-11 03:37:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:56.796254 | orchestrator | 2026-04-11 03:37:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:56.797750 | orchestrator | 2026-04-11 03:37:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:56.797784 | orchestrator | 2026-04-11 03:37:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:37:59.848176 | orchestrator | 2026-04-11 03:37:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:37:59.850107 | orchestrator | 2026-04-11 03:37:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:37:59.850245 | orchestrator | 2026-04-11 03:37:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:02.894576 | orchestrator | 2026-04-11 03:38:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:02.896685 | orchestrator | 2026-04-11 03:38:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:02.896737 | orchestrator | 2026-04-11 03:38:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:05.945062 | orchestrator | 2026-04-11 03:38:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:05.946246 | orchestrator | 2026-04-11 03:38:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:05.946283 | orchestrator | 2026-04-11 03:38:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:08.994243 | orchestrator | 2026-04-11 03:38:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:08.995947 | orchestrator | 2026-04-11 03:38:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:08.995999 | orchestrator | 2026-04-11 03:38:08 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:38:12.053622 | orchestrator | 2026-04-11 03:38:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:12.056342 | orchestrator | 2026-04-11 03:38:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:12.056419 | orchestrator | 2026-04-11 03:38:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:15.101957 | orchestrator | 2026-04-11 03:38:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:15.103572 | orchestrator | 2026-04-11 03:38:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:15.103619 | orchestrator | 2026-04-11 03:38:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:18.147805 | orchestrator | 2026-04-11 03:38:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:18.149499 | orchestrator | 2026-04-11 03:38:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:18.149613 | orchestrator | 2026-04-11 03:38:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:21.205533 | orchestrator | 2026-04-11 03:38:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:21.209873 | orchestrator | 2026-04-11 03:38:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:21.209952 | orchestrator | 2026-04-11 03:38:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:24.253854 | orchestrator | 2026-04-11 03:38:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:24.254629 | orchestrator | 2026-04-11 03:38:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:24.254673 | orchestrator | 2026-04-11 03:38:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:27.307316 | orchestrator | 2026-04-11 
03:38:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:27.310577 | orchestrator | 2026-04-11 03:38:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:27.310664 | orchestrator | 2026-04-11 03:38:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:30.362741 | orchestrator | 2026-04-11 03:38:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:30.364345 | orchestrator | 2026-04-11 03:38:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:30.365719 | orchestrator | 2026-04-11 03:38:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:33.415965 | orchestrator | 2026-04-11 03:38:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:33.418708 | orchestrator | 2026-04-11 03:38:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:33.418770 | orchestrator | 2026-04-11 03:38:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:36.466526 | orchestrator | 2026-04-11 03:38:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:36.468138 | orchestrator | 2026-04-11 03:38:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:36.468223 | orchestrator | 2026-04-11 03:38:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:39.507755 | orchestrator | 2026-04-11 03:38:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:39.509596 | orchestrator | 2026-04-11 03:38:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:39.509716 | orchestrator | 2026-04-11 03:38:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:42.558193 | orchestrator | 2026-04-11 03:38:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:38:42.560043 | orchestrator | 2026-04-11 03:38:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:42.560170 | orchestrator | 2026-04-11 03:38:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:45.610557 | orchestrator | 2026-04-11 03:38:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:45.611379 | orchestrator | 2026-04-11 03:38:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:45.611498 | orchestrator | 2026-04-11 03:38:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:48.660039 | orchestrator | 2026-04-11 03:38:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:48.661655 | orchestrator | 2026-04-11 03:38:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:48.661702 | orchestrator | 2026-04-11 03:38:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:51.711522 | orchestrator | 2026-04-11 03:38:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:51.718369 | orchestrator | 2026-04-11 03:38:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:51.718461 | orchestrator | 2026-04-11 03:38:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:54.772839 | orchestrator | 2026-04-11 03:38:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:54.775631 | orchestrator | 2026-04-11 03:38:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:54.775696 | orchestrator | 2026-04-11 03:38:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:38:57.833148 | orchestrator | 2026-04-11 03:38:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:38:57.835736 | orchestrator | 2026-04-11 03:38:57 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:38:57.835813 | orchestrator | 2026-04-11 03:38:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:00.875156 | orchestrator | 2026-04-11 03:39:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:00.878860 | orchestrator | 2026-04-11 03:39:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:00.878917 | orchestrator | 2026-04-11 03:39:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:03.927097 | orchestrator | 2026-04-11 03:39:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:03.928550 | orchestrator | 2026-04-11 03:39:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:03.928627 | orchestrator | 2026-04-11 03:39:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:06.983033 | orchestrator | 2026-04-11 03:39:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:06.984887 | orchestrator | 2026-04-11 03:39:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:06.985875 | orchestrator | 2026-04-11 03:39:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:10.024855 | orchestrator | 2026-04-11 03:39:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:10.028869 | orchestrator | 2026-04-11 03:39:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:10.028975 | orchestrator | 2026-04-11 03:39:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:13.072299 | orchestrator | 2026-04-11 03:39:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:13.073690 | orchestrator | 2026-04-11 03:39:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:39:13.073739 | orchestrator | 2026-04-11 03:39:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:16.120061 | orchestrator | 2026-04-11 03:39:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:16.122378 | orchestrator | 2026-04-11 03:39:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:16.122473 | orchestrator | 2026-04-11 03:39:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:19.168813 | orchestrator | 2026-04-11 03:39:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:19.170566 | orchestrator | 2026-04-11 03:39:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:19.170615 | orchestrator | 2026-04-11 03:39:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:22.221226 | orchestrator | 2026-04-11 03:39:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:22.222831 | orchestrator | 2026-04-11 03:39:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:22.223046 | orchestrator | 2026-04-11 03:39:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:25.274549 | orchestrator | 2026-04-11 03:39:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:25.276337 | orchestrator | 2026-04-11 03:39:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:25.276391 | orchestrator | 2026-04-11 03:39:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:28.321902 | orchestrator | 2026-04-11 03:39:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:28.324147 | orchestrator | 2026-04-11 03:39:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:28.324195 | orchestrator | 2026-04-11 03:39:28 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:39:31.371187 | orchestrator | 2026-04-11 03:39:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:31.373424 | orchestrator | 2026-04-11 03:39:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:31.373545 | orchestrator | 2026-04-11 03:39:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:34.419571 | orchestrator | 2026-04-11 03:39:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:34.420320 | orchestrator | 2026-04-11 03:39:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:34.420356 | orchestrator | 2026-04-11 03:39:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:37.470450 | orchestrator | 2026-04-11 03:39:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:37.471625 | orchestrator | 2026-04-11 03:39:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:37.471656 | orchestrator | 2026-04-11 03:39:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:40.523865 | orchestrator | 2026-04-11 03:39:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:40.524517 | orchestrator | 2026-04-11 03:39:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:40.524550 | orchestrator | 2026-04-11 03:39:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:43.576921 | orchestrator | 2026-04-11 03:39:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:43.580922 | orchestrator | 2026-04-11 03:39:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:43.581001 | orchestrator | 2026-04-11 03:39:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:46.630641 | orchestrator | 2026-04-11 
03:39:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:46.632429 | orchestrator | 2026-04-11 03:39:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:46.632499 | orchestrator | 2026-04-11 03:39:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:49.678355 | orchestrator | 2026-04-11 03:39:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:49.680083 | orchestrator | 2026-04-11 03:39:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:49.680177 | orchestrator | 2026-04-11 03:39:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:52.723250 | orchestrator | 2026-04-11 03:39:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:52.724780 | orchestrator | 2026-04-11 03:39:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:52.724860 | orchestrator | 2026-04-11 03:39:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:55.774312 | orchestrator | 2026-04-11 03:39:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:55.776528 | orchestrator | 2026-04-11 03:39:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:55.776607 | orchestrator | 2026-04-11 03:39:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:39:58.827021 | orchestrator | 2026-04-11 03:39:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:39:58.828921 | orchestrator | 2026-04-11 03:39:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:39:58.828985 | orchestrator | 2026-04-11 03:39:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:40:01.877771 | orchestrator | 2026-04-11 03:40:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:40:01.878258 | orchestrator | 2026-04-11 03:40:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:40:01.878285 | orchestrator | 2026-04-11 03:40:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:40:04.924189 | orchestrator | 2026-04-11 03:40:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:40:04.926545 | orchestrator | 2026-04-11 03:40:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:40:04.926605 | orchestrator | 2026-04-11 03:40:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:40:07.976746 | orchestrator | 2026-04-11 03:40:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:40:07.980195 | orchestrator | 2026-04-11 03:40:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:40:07.980313 | orchestrator | 2026-04-11 03:40:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:40:11.025146 | orchestrator | 2026-04-11 03:40:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:40:11.027183 | orchestrator | 2026-04-11 03:40:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:40:11.027276 | orchestrator | 2026-04-11 03:40:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:40:14.076086 | orchestrator | 2026-04-11 03:40:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:40:14.078948 | orchestrator | 2026-04-11 03:40:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:40:14.079035 | orchestrator | 2026-04-11 03:40:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:40:17.133066 | orchestrator | 2026-04-11 03:40:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:40:17.137572 | orchestrator | 2026-04-11 03:40:17 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:40:17.137661 | orchestrator | 2026-04-11 03:40:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:40:20.190431 | orchestrator | 2026-04-11 03:40:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:40:20.193595 | orchestrator | 2026-04-11 03:40:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:40:20.193651 | orchestrator | 2026-04-11 03:40:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:40:23.239721 | orchestrator | 2026-04-11 03:40:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:40:23.241908 | orchestrator | 2026-04-11 03:40:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:40:23.241984 | orchestrator | 2026-04-11 03:40:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:40:26.296837 | orchestrator | 2026-04-11 03:40:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:40:26.300404 | orchestrator | 2026-04-11 03:40:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:40:26.300516 | orchestrator | 2026-04-11 03:40:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:40:29.354072 | orchestrator | 2026-04-11 03:40:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:40:29.355883 | orchestrator | 2026-04-11 03:40:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:40:29.356537 | orchestrator | 2026-04-11 03:40:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:40:32.411961 | orchestrator | 2026-04-11 03:40:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:40:32.415638 | orchestrator | 2026-04-11 03:40:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:40:32.415704 | orchestrator | 2026-04-11 03:40:32 | INFO  | Wait 1 second(s) until the next check
2026-04-11 03:40:35.470580 | orchestrator | 2026-04-11 03:40:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 03:40:35.475518 | orchestrator | 2026-04-11 03:40:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 03:40:35.475621 | orchestrator | 2026-04-11 03:40:35 | INFO  | Wait 1 second(s) until the next check
2026-04-11 03:46:04.947710 | orchestrator | 2026-04-11 03:46:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 03:46:04.948645 | orchestrator | 2026-04-11 03:46:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 03:46:04.948672 | orchestrator | 2026-04-11 03:46:04 | INFO  | Wait 1 second(s)
until the next check 2026-04-11 03:46:07.988753 | orchestrator | 2026-04-11 03:46:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:07.989658 | orchestrator | 2026-04-11 03:46:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:07.989694 | orchestrator | 2026-04-11 03:46:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:11.040361 | orchestrator | 2026-04-11 03:46:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:11.043922 | orchestrator | 2026-04-11 03:46:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:11.044163 | orchestrator | 2026-04-11 03:46:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:14.089113 | orchestrator | 2026-04-11 03:46:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:14.089560 | orchestrator | 2026-04-11 03:46:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:14.089588 | orchestrator | 2026-04-11 03:46:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:17.133827 | orchestrator | 2026-04-11 03:46:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:17.135729 | orchestrator | 2026-04-11 03:46:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:17.135787 | orchestrator | 2026-04-11 03:46:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:20.179761 | orchestrator | 2026-04-11 03:46:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:20.181231 | orchestrator | 2026-04-11 03:46:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:20.181283 | orchestrator | 2026-04-11 03:46:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:23.223623 | orchestrator | 2026-04-11 
03:46:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:23.224958 | orchestrator | 2026-04-11 03:46:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:23.225009 | orchestrator | 2026-04-11 03:46:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:26.269469 | orchestrator | 2026-04-11 03:46:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:26.270708 | orchestrator | 2026-04-11 03:46:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:26.270753 | orchestrator | 2026-04-11 03:46:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:29.314362 | orchestrator | 2026-04-11 03:46:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:29.315025 | orchestrator | 2026-04-11 03:46:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:29.315048 | orchestrator | 2026-04-11 03:46:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:32.361453 | orchestrator | 2026-04-11 03:46:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:32.363880 | orchestrator | 2026-04-11 03:46:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:32.363955 | orchestrator | 2026-04-11 03:46:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:35.418134 | orchestrator | 2026-04-11 03:46:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:35.419452 | orchestrator | 2026-04-11 03:46:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:35.419556 | orchestrator | 2026-04-11 03:46:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:38.459708 | orchestrator | 2026-04-11 03:46:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:46:38.460338 | orchestrator | 2026-04-11 03:46:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:38.460380 | orchestrator | 2026-04-11 03:46:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:41.502885 | orchestrator | 2026-04-11 03:46:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:41.504597 | orchestrator | 2026-04-11 03:46:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:41.504658 | orchestrator | 2026-04-11 03:46:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:44.556332 | orchestrator | 2026-04-11 03:46:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:44.558582 | orchestrator | 2026-04-11 03:46:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:44.558685 | orchestrator | 2026-04-11 03:46:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:47.610487 | orchestrator | 2026-04-11 03:46:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:47.612364 | orchestrator | 2026-04-11 03:46:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:47.612404 | orchestrator | 2026-04-11 03:46:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:50.660783 | orchestrator | 2026-04-11 03:46:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:50.662478 | orchestrator | 2026-04-11 03:46:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:50.662548 | orchestrator | 2026-04-11 03:46:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:53.713539 | orchestrator | 2026-04-11 03:46:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:53.716420 | orchestrator | 2026-04-11 03:46:53 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:53.716507 | orchestrator | 2026-04-11 03:46:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:56.757651 | orchestrator | 2026-04-11 03:46:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:56.763992 | orchestrator | 2026-04-11 03:46:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:56.764124 | orchestrator | 2026-04-11 03:46:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:46:59.810951 | orchestrator | 2026-04-11 03:46:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:46:59.815102 | orchestrator | 2026-04-11 03:46:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:46:59.815308 | orchestrator | 2026-04-11 03:46:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:02.863648 | orchestrator | 2026-04-11 03:47:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:02.865207 | orchestrator | 2026-04-11 03:47:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:02.865259 | orchestrator | 2026-04-11 03:47:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:05.919566 | orchestrator | 2026-04-11 03:47:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:05.921348 | orchestrator | 2026-04-11 03:47:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:05.921389 | orchestrator | 2026-04-11 03:47:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:08.973743 | orchestrator | 2026-04-11 03:47:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:08.975528 | orchestrator | 2026-04-11 03:47:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:47:08.975588 | orchestrator | 2026-04-11 03:47:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:12.038915 | orchestrator | 2026-04-11 03:47:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:12.041180 | orchestrator | 2026-04-11 03:47:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:12.041250 | orchestrator | 2026-04-11 03:47:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:15.087984 | orchestrator | 2026-04-11 03:47:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:15.090453 | orchestrator | 2026-04-11 03:47:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:15.090543 | orchestrator | 2026-04-11 03:47:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:18.150703 | orchestrator | 2026-04-11 03:47:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:18.152653 | orchestrator | 2026-04-11 03:47:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:18.152735 | orchestrator | 2026-04-11 03:47:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:21.204321 | orchestrator | 2026-04-11 03:47:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:21.206901 | orchestrator | 2026-04-11 03:47:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:21.207006 | orchestrator | 2026-04-11 03:47:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:24.246859 | orchestrator | 2026-04-11 03:47:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:24.248091 | orchestrator | 2026-04-11 03:47:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:24.248146 | orchestrator | 2026-04-11 03:47:24 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:47:27.305140 | orchestrator | 2026-04-11 03:47:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:27.307139 | orchestrator | 2026-04-11 03:47:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:27.307219 | orchestrator | 2026-04-11 03:47:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:30.352117 | orchestrator | 2026-04-11 03:47:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:30.353374 | orchestrator | 2026-04-11 03:47:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:30.353460 | orchestrator | 2026-04-11 03:47:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:33.400855 | orchestrator | 2026-04-11 03:47:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:33.403007 | orchestrator | 2026-04-11 03:47:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:33.403084 | orchestrator | 2026-04-11 03:47:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:36.452331 | orchestrator | 2026-04-11 03:47:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:36.454589 | orchestrator | 2026-04-11 03:47:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:36.454654 | orchestrator | 2026-04-11 03:47:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:39.505760 | orchestrator | 2026-04-11 03:47:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:39.507066 | orchestrator | 2026-04-11 03:47:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:39.507111 | orchestrator | 2026-04-11 03:47:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:42.556451 | orchestrator | 2026-04-11 
03:47:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:42.558070 | orchestrator | 2026-04-11 03:47:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:42.558208 | orchestrator | 2026-04-11 03:47:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:45.614191 | orchestrator | 2026-04-11 03:47:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:45.616446 | orchestrator | 2026-04-11 03:47:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:45.616506 | orchestrator | 2026-04-11 03:47:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:48.671473 | orchestrator | 2026-04-11 03:47:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:48.672584 | orchestrator | 2026-04-11 03:47:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:48.672644 | orchestrator | 2026-04-11 03:47:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:51.717669 | orchestrator | 2026-04-11 03:47:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:51.719450 | orchestrator | 2026-04-11 03:47:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:51.719480 | orchestrator | 2026-04-11 03:47:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:54.769590 | orchestrator | 2026-04-11 03:47:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:47:54.771613 | orchestrator | 2026-04-11 03:47:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:54.771669 | orchestrator | 2026-04-11 03:47:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:47:57.819132 | orchestrator | 2026-04-11 03:47:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:47:57.820582 | orchestrator | 2026-04-11 03:47:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:47:57.820618 | orchestrator | 2026-04-11 03:47:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:00.875515 | orchestrator | 2026-04-11 03:48:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:00.877546 | orchestrator | 2026-04-11 03:48:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:00.877603 | orchestrator | 2026-04-11 03:48:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:03.923498 | orchestrator | 2026-04-11 03:48:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:03.925222 | orchestrator | 2026-04-11 03:48:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:03.925425 | orchestrator | 2026-04-11 03:48:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:06.977239 | orchestrator | 2026-04-11 03:48:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:06.980596 | orchestrator | 2026-04-11 03:48:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:06.980674 | orchestrator | 2026-04-11 03:48:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:10.030681 | orchestrator | 2026-04-11 03:48:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:10.033146 | orchestrator | 2026-04-11 03:48:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:10.033511 | orchestrator | 2026-04-11 03:48:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:13.083785 | orchestrator | 2026-04-11 03:48:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:13.084548 | orchestrator | 2026-04-11 03:48:13 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:13.084647 | orchestrator | 2026-04-11 03:48:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:16.133836 | orchestrator | 2026-04-11 03:48:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:16.134498 | orchestrator | 2026-04-11 03:48:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:16.134576 | orchestrator | 2026-04-11 03:48:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:19.179504 | orchestrator | 2026-04-11 03:48:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:19.181085 | orchestrator | 2026-04-11 03:48:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:19.181136 | orchestrator | 2026-04-11 03:48:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:22.223441 | orchestrator | 2026-04-11 03:48:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:22.225322 | orchestrator | 2026-04-11 03:48:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:22.225377 | orchestrator | 2026-04-11 03:48:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:25.266006 | orchestrator | 2026-04-11 03:48:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:25.266824 | orchestrator | 2026-04-11 03:48:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:25.266983 | orchestrator | 2026-04-11 03:48:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:28.307442 | orchestrator | 2026-04-11 03:48:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:28.307792 | orchestrator | 2026-04-11 03:48:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:48:28.307823 | orchestrator | 2026-04-11 03:48:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:31.351940 | orchestrator | 2026-04-11 03:48:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:31.352999 | orchestrator | 2026-04-11 03:48:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:31.353105 | orchestrator | 2026-04-11 03:48:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:34.397953 | orchestrator | 2026-04-11 03:48:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:34.399726 | orchestrator | 2026-04-11 03:48:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:34.399791 | orchestrator | 2026-04-11 03:48:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:37.448774 | orchestrator | 2026-04-11 03:48:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:37.451631 | orchestrator | 2026-04-11 03:48:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:37.451692 | orchestrator | 2026-04-11 03:48:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:40.501476 | orchestrator | 2026-04-11 03:48:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:40.502225 | orchestrator | 2026-04-11 03:48:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:40.502258 | orchestrator | 2026-04-11 03:48:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:43.552539 | orchestrator | 2026-04-11 03:48:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:43.553530 | orchestrator | 2026-04-11 03:48:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:43.553623 | orchestrator | 2026-04-11 03:48:43 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:48:46.603495 | orchestrator | 2026-04-11 03:48:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:46.605429 | orchestrator | 2026-04-11 03:48:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:46.605484 | orchestrator | 2026-04-11 03:48:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:49.655846 | orchestrator | 2026-04-11 03:48:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:49.658476 | orchestrator | 2026-04-11 03:48:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:49.658520 | orchestrator | 2026-04-11 03:48:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:52.710638 | orchestrator | 2026-04-11 03:48:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:52.711081 | orchestrator | 2026-04-11 03:48:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:52.711119 | orchestrator | 2026-04-11 03:48:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:55.757305 | orchestrator | 2026-04-11 03:48:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:55.757826 | orchestrator | 2026-04-11 03:48:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:55.757918 | orchestrator | 2026-04-11 03:48:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:48:58.809569 | orchestrator | 2026-04-11 03:48:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:48:58.809847 | orchestrator | 2026-04-11 03:48:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:48:58.809885 | orchestrator | 2026-04-11 03:48:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:01.855641 | orchestrator | 2026-04-11 
03:49:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:01.856431 | orchestrator | 2026-04-11 03:49:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:01.856511 | orchestrator | 2026-04-11 03:49:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:04.904138 | orchestrator | 2026-04-11 03:49:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:04.905820 | orchestrator | 2026-04-11 03:49:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:04.905908 | orchestrator | 2026-04-11 03:49:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:07.939543 | orchestrator | 2026-04-11 03:49:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:07.943196 | orchestrator | 2026-04-11 03:49:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:07.943300 | orchestrator | 2026-04-11 03:49:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:10.996368 | orchestrator | 2026-04-11 03:49:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:10.996829 | orchestrator | 2026-04-11 03:49:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:10.996863 | orchestrator | 2026-04-11 03:49:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:14.046794 | orchestrator | 2026-04-11 03:49:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:14.050077 | orchestrator | 2026-04-11 03:49:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:14.050161 | orchestrator | 2026-04-11 03:49:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:17.110159 | orchestrator | 2026-04-11 03:49:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:49:17.111873 | orchestrator | 2026-04-11 03:49:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:17.111915 | orchestrator | 2026-04-11 03:49:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:20.162352 | orchestrator | 2026-04-11 03:49:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:20.164815 | orchestrator | 2026-04-11 03:49:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:20.164885 | orchestrator | 2026-04-11 03:49:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:23.212485 | orchestrator | 2026-04-11 03:49:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:23.215812 | orchestrator | 2026-04-11 03:49:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:23.215910 | orchestrator | 2026-04-11 03:49:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:26.257609 | orchestrator | 2026-04-11 03:49:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:26.259051 | orchestrator | 2026-04-11 03:49:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:26.259118 | orchestrator | 2026-04-11 03:49:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:29.302355 | orchestrator | 2026-04-11 03:49:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:29.305808 | orchestrator | 2026-04-11 03:49:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:29.305910 | orchestrator | 2026-04-11 03:49:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:32.348989 | orchestrator | 2026-04-11 03:49:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:32.349762 | orchestrator | 2026-04-11 03:49:32 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:32.349970 | orchestrator | 2026-04-11 03:49:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:35.403900 | orchestrator | 2026-04-11 03:49:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:35.406480 | orchestrator | 2026-04-11 03:49:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:35.406550 | orchestrator | 2026-04-11 03:49:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:38.453134 | orchestrator | 2026-04-11 03:49:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:38.455429 | orchestrator | 2026-04-11 03:49:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:38.455487 | orchestrator | 2026-04-11 03:49:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:41.495359 | orchestrator | 2026-04-11 03:49:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:41.496816 | orchestrator | 2026-04-11 03:49:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:41.496870 | orchestrator | 2026-04-11 03:49:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:44.539162 | orchestrator | 2026-04-11 03:49:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:44.541375 | orchestrator | 2026-04-11 03:49:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:49:44.541430 | orchestrator | 2026-04-11 03:49:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:49:47.590587 | orchestrator | 2026-04-11 03:49:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:49:47.591918 | orchestrator | 2026-04-11 03:49:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:49:47.591957 | orchestrator | 2026-04-11 03:49:47 | INFO  | Wait 1 second(s) until the next check
2026-04-11 03:49:50.644388 | orchestrator | 2026-04-11 03:49:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 03:49:50.645748 | orchestrator | 2026-04-11 03:49:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 03:49:50.645794 | orchestrator | 2026-04-11 03:49:50 | INFO  | Wait 1 second(s) until the next check
[identical poll cycle repeated every ~3 s from 03:49:53 through 03:56:46; both tasks remained in state STARTED throughout, with one longer gap between checks at 03:53:42 and 03:55:42]
2026-04-11 03:56:49.510795 | orchestrator | 2026-04-11 03:56:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 03:56:49.511292 | orchestrator | 2026-04-11 03:56:49 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:56:49.511328 | orchestrator | 2026-04-11 03:56:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:56:52.556890 | orchestrator | 2026-04-11 03:56:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:56:52.557609 | orchestrator | 2026-04-11 03:56:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:56:52.558248 | orchestrator | 2026-04-11 03:56:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:56:55.605927 | orchestrator | 2026-04-11 03:56:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:56:55.608749 | orchestrator | 2026-04-11 03:56:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:56:55.608851 | orchestrator | 2026-04-11 03:56:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:56:58.658366 | orchestrator | 2026-04-11 03:56:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:56:58.659333 | orchestrator | 2026-04-11 03:56:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:56:58.659374 | orchestrator | 2026-04-11 03:56:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:01.708707 | orchestrator | 2026-04-11 03:57:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:01.711275 | orchestrator | 2026-04-11 03:57:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:01.711352 | orchestrator | 2026-04-11 03:57:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:04.759907 | orchestrator | 2026-04-11 03:57:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:04.762474 | orchestrator | 2026-04-11 03:57:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:57:04.762525 | orchestrator | 2026-04-11 03:57:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:07.811862 | orchestrator | 2026-04-11 03:57:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:07.814795 | orchestrator | 2026-04-11 03:57:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:07.814912 | orchestrator | 2026-04-11 03:57:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:10.868304 | orchestrator | 2026-04-11 03:57:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:10.872320 | orchestrator | 2026-04-11 03:57:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:10.872389 | orchestrator | 2026-04-11 03:57:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:13.920128 | orchestrator | 2026-04-11 03:57:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:13.922677 | orchestrator | 2026-04-11 03:57:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:13.922763 | orchestrator | 2026-04-11 03:57:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:16.972880 | orchestrator | 2026-04-11 03:57:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:16.975228 | orchestrator | 2026-04-11 03:57:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:16.975303 | orchestrator | 2026-04-11 03:57:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:20.027523 | orchestrator | 2026-04-11 03:57:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:20.027601 | orchestrator | 2026-04-11 03:57:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:20.027608 | orchestrator | 2026-04-11 03:57:20 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:57:23.068098 | orchestrator | 2026-04-11 03:57:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:23.069321 | orchestrator | 2026-04-11 03:57:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:23.069389 | orchestrator | 2026-04-11 03:57:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:26.117040 | orchestrator | 2026-04-11 03:57:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:26.118409 | orchestrator | 2026-04-11 03:57:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:26.118455 | orchestrator | 2026-04-11 03:57:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:29.166855 | orchestrator | 2026-04-11 03:57:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:29.169033 | orchestrator | 2026-04-11 03:57:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:29.169094 | orchestrator | 2026-04-11 03:57:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:32.216326 | orchestrator | 2026-04-11 03:57:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:32.217784 | orchestrator | 2026-04-11 03:57:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:32.218196 | orchestrator | 2026-04-11 03:57:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:35.263283 | orchestrator | 2026-04-11 03:57:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:35.265617 | orchestrator | 2026-04-11 03:57:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:35.265669 | orchestrator | 2026-04-11 03:57:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:38.310913 | orchestrator | 2026-04-11 
03:57:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:38.312334 | orchestrator | 2026-04-11 03:57:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:38.312426 | orchestrator | 2026-04-11 03:57:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:41.355731 | orchestrator | 2026-04-11 03:57:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:41.357615 | orchestrator | 2026-04-11 03:57:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:41.357676 | orchestrator | 2026-04-11 03:57:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:44.401712 | orchestrator | 2026-04-11 03:57:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:44.404117 | orchestrator | 2026-04-11 03:57:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:44.404191 | orchestrator | 2026-04-11 03:57:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:47.446853 | orchestrator | 2026-04-11 03:57:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:47.448906 | orchestrator | 2026-04-11 03:57:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:47.448996 | orchestrator | 2026-04-11 03:57:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:50.493153 | orchestrator | 2026-04-11 03:57:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:50.495135 | orchestrator | 2026-04-11 03:57:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:50.495183 | orchestrator | 2026-04-11 03:57:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:53.542506 | orchestrator | 2026-04-11 03:57:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:57:53.544940 | orchestrator | 2026-04-11 03:57:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:53.544974 | orchestrator | 2026-04-11 03:57:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:56.589858 | orchestrator | 2026-04-11 03:57:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:56.592164 | orchestrator | 2026-04-11 03:57:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:56.592211 | orchestrator | 2026-04-11 03:57:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:57:59.637435 | orchestrator | 2026-04-11 03:57:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:57:59.639707 | orchestrator | 2026-04-11 03:57:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:57:59.639806 | orchestrator | 2026-04-11 03:57:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:02.685171 | orchestrator | 2026-04-11 03:58:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:02.685461 | orchestrator | 2026-04-11 03:58:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:02.685491 | orchestrator | 2026-04-11 03:58:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:05.740168 | orchestrator | 2026-04-11 03:58:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:05.742538 | orchestrator | 2026-04-11 03:58:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:05.742611 | orchestrator | 2026-04-11 03:58:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:08.790176 | orchestrator | 2026-04-11 03:58:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:08.794365 | orchestrator | 2026-04-11 03:58:08 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:08.794433 | orchestrator | 2026-04-11 03:58:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:11.841177 | orchestrator | 2026-04-11 03:58:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:11.842727 | orchestrator | 2026-04-11 03:58:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:11.842837 | orchestrator | 2026-04-11 03:58:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:14.880161 | orchestrator | 2026-04-11 03:58:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:14.880940 | orchestrator | 2026-04-11 03:58:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:14.880993 | orchestrator | 2026-04-11 03:58:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:17.925056 | orchestrator | 2026-04-11 03:58:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:17.926851 | orchestrator | 2026-04-11 03:58:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:17.926957 | orchestrator | 2026-04-11 03:58:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:20.974858 | orchestrator | 2026-04-11 03:58:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:20.978047 | orchestrator | 2026-04-11 03:58:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:20.978102 | orchestrator | 2026-04-11 03:58:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:24.030370 | orchestrator | 2026-04-11 03:58:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:24.031492 | orchestrator | 2026-04-11 03:58:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:58:24.031578 | orchestrator | 2026-04-11 03:58:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:27.071001 | orchestrator | 2026-04-11 03:58:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:27.072089 | orchestrator | 2026-04-11 03:58:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:27.072164 | orchestrator | 2026-04-11 03:58:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:30.120574 | orchestrator | 2026-04-11 03:58:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:30.123605 | orchestrator | 2026-04-11 03:58:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:30.123671 | orchestrator | 2026-04-11 03:58:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:33.173153 | orchestrator | 2026-04-11 03:58:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:33.175463 | orchestrator | 2026-04-11 03:58:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:33.175520 | orchestrator | 2026-04-11 03:58:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:36.214710 | orchestrator | 2026-04-11 03:58:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:36.216618 | orchestrator | 2026-04-11 03:58:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:36.216665 | orchestrator | 2026-04-11 03:58:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:39.257359 | orchestrator | 2026-04-11 03:58:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:39.258924 | orchestrator | 2026-04-11 03:58:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:39.258977 | orchestrator | 2026-04-11 03:58:39 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 03:58:42.302311 | orchestrator | 2026-04-11 03:58:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:42.304148 | orchestrator | 2026-04-11 03:58:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:42.304223 | orchestrator | 2026-04-11 03:58:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:45.352663 | orchestrator | 2026-04-11 03:58:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:45.353533 | orchestrator | 2026-04-11 03:58:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:45.353594 | orchestrator | 2026-04-11 03:58:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:48.397256 | orchestrator | 2026-04-11 03:58:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:48.398548 | orchestrator | 2026-04-11 03:58:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:48.398594 | orchestrator | 2026-04-11 03:58:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:51.465913 | orchestrator | 2026-04-11 03:58:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:51.467543 | orchestrator | 2026-04-11 03:58:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:51.467695 | orchestrator | 2026-04-11 03:58:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:54.509566 | orchestrator | 2026-04-11 03:58:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:54.510112 | orchestrator | 2026-04-11 03:58:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:54.510365 | orchestrator | 2026-04-11 03:58:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:58:57.561019 | orchestrator | 2026-04-11 
03:58:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:58:57.563588 | orchestrator | 2026-04-11 03:58:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:58:57.563653 | orchestrator | 2026-04-11 03:58:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:00.614396 | orchestrator | 2026-04-11 03:59:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:00.615919 | orchestrator | 2026-04-11 03:59:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:00.615992 | orchestrator | 2026-04-11 03:59:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:03.671640 | orchestrator | 2026-04-11 03:59:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:03.674125 | orchestrator | 2026-04-11 03:59:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:03.674167 | orchestrator | 2026-04-11 03:59:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:06.721988 | orchestrator | 2026-04-11 03:59:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:06.723328 | orchestrator | 2026-04-11 03:59:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:06.723469 | orchestrator | 2026-04-11 03:59:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:09.779838 | orchestrator | 2026-04-11 03:59:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:09.780074 | orchestrator | 2026-04-11 03:59:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:09.780104 | orchestrator | 2026-04-11 03:59:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:12.831520 | orchestrator | 2026-04-11 03:59:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 03:59:12.834506 | orchestrator | 2026-04-11 03:59:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:12.834575 | orchestrator | 2026-04-11 03:59:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:15.876405 | orchestrator | 2026-04-11 03:59:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:15.877383 | orchestrator | 2026-04-11 03:59:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:15.877429 | orchestrator | 2026-04-11 03:59:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:18.927024 | orchestrator | 2026-04-11 03:59:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:18.928553 | orchestrator | 2026-04-11 03:59:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:18.929082 | orchestrator | 2026-04-11 03:59:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:21.983443 | orchestrator | 2026-04-11 03:59:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:21.985136 | orchestrator | 2026-04-11 03:59:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:21.985190 | orchestrator | 2026-04-11 03:59:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:25.035807 | orchestrator | 2026-04-11 03:59:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:25.037297 | orchestrator | 2026-04-11 03:59:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:25.037348 | orchestrator | 2026-04-11 03:59:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:28.077692 | orchestrator | 2026-04-11 03:59:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:28.079431 | orchestrator | 2026-04-11 03:59:28 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:28.079488 | orchestrator | 2026-04-11 03:59:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:31.125095 | orchestrator | 2026-04-11 03:59:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:31.125598 | orchestrator | 2026-04-11 03:59:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:31.125696 | orchestrator | 2026-04-11 03:59:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:34.167862 | orchestrator | 2026-04-11 03:59:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:34.169749 | orchestrator | 2026-04-11 03:59:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:34.169827 | orchestrator | 2026-04-11 03:59:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:37.218216 | orchestrator | 2026-04-11 03:59:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:37.219993 | orchestrator | 2026-04-11 03:59:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:37.220033 | orchestrator | 2026-04-11 03:59:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:40.266628 | orchestrator | 2026-04-11 03:59:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:40.268517 | orchestrator | 2026-04-11 03:59:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:40.269023 | orchestrator | 2026-04-11 03:59:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:43.322110 | orchestrator | 2026-04-11 03:59:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:43.324135 | orchestrator | 2026-04-11 03:59:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
03:59:43.324184 | orchestrator | 2026-04-11 03:59:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:46.375754 | orchestrator | 2026-04-11 03:59:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:46.378877 | orchestrator | 2026-04-11 03:59:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:46.378959 | orchestrator | 2026-04-11 03:59:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:49.422871 | orchestrator | 2026-04-11 03:59:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:49.425362 | orchestrator | 2026-04-11 03:59:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:49.425409 | orchestrator | 2026-04-11 03:59:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:52.460162 | orchestrator | 2026-04-11 03:59:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:52.461275 | orchestrator | 2026-04-11 03:59:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:52.461325 | orchestrator | 2026-04-11 03:59:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:55.504779 | orchestrator | 2026-04-11 03:59:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:55.505558 | orchestrator | 2026-04-11 03:59:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:55.505591 | orchestrator | 2026-04-11 03:59:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 03:59:58.560526 | orchestrator | 2026-04-11 03:59:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 03:59:58.562490 | orchestrator | 2026-04-11 03:59:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 03:59:58.562545 | orchestrator | 2026-04-11 03:59:58 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 04:00:01.613423 | orchestrator | 2026-04-11 04:00:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:00:01.616537 | orchestrator | 2026-04-11 04:00:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:00:01.617123 | orchestrator | 2026-04-11 04:00:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:00:04.664884 | orchestrator | 2026-04-11 04:00:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:00:04.666147 | orchestrator | 2026-04-11 04:00:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:00:04.666217 | orchestrator | 2026-04-11 04:00:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:00:07.715221 | orchestrator | 2026-04-11 04:00:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:00:07.716449 | orchestrator | 2026-04-11 04:00:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:00:07.716512 | orchestrator | 2026-04-11 04:00:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:00:10.767750 | orchestrator | 2026-04-11 04:00:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:00:10.771166 | orchestrator | 2026-04-11 04:00:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:00:10.771252 | orchestrator | 2026-04-11 04:00:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:00:13.820075 | orchestrator | 2026-04-11 04:00:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:00:13.821175 | orchestrator | 2026-04-11 04:00:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:00:13.821227 | orchestrator | 2026-04-11 04:00:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:00:16.873228 | orchestrator | 2026-04-11 
04:00:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:00:16.875223 | orchestrator | 2026-04-11 04:00:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:00:16.875300 | orchestrator | 2026-04-11 04:00:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:00:19.931068 | orchestrator | 2026-04-11 04:00:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:00:19.932526 | orchestrator | 2026-04-11 04:00:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:00:19.932627 | orchestrator | 2026-04-11 04:00:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:00:22.978278 | orchestrator | 2026-04-11 04:00:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:00:22.980364 | orchestrator | 2026-04-11 04:00:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:00:22.980427 | orchestrator | 2026-04-11 04:00:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:00:26.031417 | orchestrator | 2026-04-11 04:00:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:00:26.032930 | orchestrator | 2026-04-11 04:00:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:00:26.033071 | orchestrator | 2026-04-11 04:00:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:00:29.075784 | orchestrator | 2026-04-11 04:00:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:00:29.077601 | orchestrator | 2026-04-11 04:00:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:00:29.077657 | orchestrator | 2026-04-11 04:00:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:00:32.128056 | orchestrator | 2026-04-11 04:00:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 04:00:32.129058 | orchestrator | 2026-04-11 04:00:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:00:32.129110 | orchestrator | 2026-04-11 04:00:32 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 04:00:35 through 04:06:01; tasks e775d4e9-d8d9-4a6e-852d-ec153b94cbfc and 27a6903c-a509-4fb9-bf78-996bf8bed0c6 remained in state STARTED throughout ...]
2026-04-11 04:06:04.385122 | orchestrator | 2026-04-11 04:06:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:04.388087 | orchestrator | 2026-04-11 04:06:04 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:04.388168 | orchestrator | 2026-04-11 04:06:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:07.439980 | orchestrator | 2026-04-11 04:06:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:07.442838 | orchestrator | 2026-04-11 04:06:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:07.442915 | orchestrator | 2026-04-11 04:06:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:10.486178 | orchestrator | 2026-04-11 04:06:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:10.487065 | orchestrator | 2026-04-11 04:06:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:10.487117 | orchestrator | 2026-04-11 04:06:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:13.536738 | orchestrator | 2026-04-11 04:06:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:13.538256 | orchestrator | 2026-04-11 04:06:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:13.538342 | orchestrator | 2026-04-11 04:06:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:16.583518 | orchestrator | 2026-04-11 04:06:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:16.585669 | orchestrator | 2026-04-11 04:06:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:16.585743 | orchestrator | 2026-04-11 04:06:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:19.627317 | orchestrator | 2026-04-11 04:06:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:19.629078 | orchestrator | 2026-04-11 04:06:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
04:06:19.629175 | orchestrator | 2026-04-11 04:06:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:22.674509 | orchestrator | 2026-04-11 04:06:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:22.676383 | orchestrator | 2026-04-11 04:06:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:22.676444 | orchestrator | 2026-04-11 04:06:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:25.719900 | orchestrator | 2026-04-11 04:06:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:25.721710 | orchestrator | 2026-04-11 04:06:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:25.721797 | orchestrator | 2026-04-11 04:06:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:28.764982 | orchestrator | 2026-04-11 04:06:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:28.766454 | orchestrator | 2026-04-11 04:06:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:28.766711 | orchestrator | 2026-04-11 04:06:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:31.818313 | orchestrator | 2026-04-11 04:06:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:31.820091 | orchestrator | 2026-04-11 04:06:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:31.820130 | orchestrator | 2026-04-11 04:06:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:34.855682 | orchestrator | 2026-04-11 04:06:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:34.856783 | orchestrator | 2026-04-11 04:06:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:34.856835 | orchestrator | 2026-04-11 04:06:34 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 04:06:37.901325 | orchestrator | 2026-04-11 04:06:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:37.903185 | orchestrator | 2026-04-11 04:06:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:37.903239 | orchestrator | 2026-04-11 04:06:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:40.956623 | orchestrator | 2026-04-11 04:06:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:40.957700 | orchestrator | 2026-04-11 04:06:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:40.957744 | orchestrator | 2026-04-11 04:06:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:44.009128 | orchestrator | 2026-04-11 04:06:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:44.011808 | orchestrator | 2026-04-11 04:06:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:44.011935 | orchestrator | 2026-04-11 04:06:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:47.054106 | orchestrator | 2026-04-11 04:06:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:47.055933 | orchestrator | 2026-04-11 04:06:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:47.055992 | orchestrator | 2026-04-11 04:06:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:50.094746 | orchestrator | 2026-04-11 04:06:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:50.096087 | orchestrator | 2026-04-11 04:06:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:50.096128 | orchestrator | 2026-04-11 04:06:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:53.137018 | orchestrator | 2026-04-11 
04:06:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:53.137963 | orchestrator | 2026-04-11 04:06:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:53.138069 | orchestrator | 2026-04-11 04:06:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:56.177906 | orchestrator | 2026-04-11 04:06:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:56.178752 | orchestrator | 2026-04-11 04:06:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:56.178777 | orchestrator | 2026-04-11 04:06:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:06:59.223175 | orchestrator | 2026-04-11 04:06:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:06:59.225054 | orchestrator | 2026-04-11 04:06:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:06:59.225100 | orchestrator | 2026-04-11 04:06:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:02.273735 | orchestrator | 2026-04-11 04:07:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:02.276492 | orchestrator | 2026-04-11 04:07:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:02.276625 | orchestrator | 2026-04-11 04:07:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:05.323995 | orchestrator | 2026-04-11 04:07:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:05.326130 | orchestrator | 2026-04-11 04:07:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:05.326195 | orchestrator | 2026-04-11 04:07:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:08.383393 | orchestrator | 2026-04-11 04:07:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 04:07:08.384695 | orchestrator | 2026-04-11 04:07:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:08.385065 | orchestrator | 2026-04-11 04:07:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:11.427337 | orchestrator | 2026-04-11 04:07:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:11.429397 | orchestrator | 2026-04-11 04:07:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:11.429471 | orchestrator | 2026-04-11 04:07:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:14.477112 | orchestrator | 2026-04-11 04:07:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:14.478214 | orchestrator | 2026-04-11 04:07:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:14.478266 | orchestrator | 2026-04-11 04:07:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:17.528589 | orchestrator | 2026-04-11 04:07:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:17.531208 | orchestrator | 2026-04-11 04:07:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:17.531260 | orchestrator | 2026-04-11 04:07:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:20.579408 | orchestrator | 2026-04-11 04:07:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:20.580095 | orchestrator | 2026-04-11 04:07:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:20.580117 | orchestrator | 2026-04-11 04:07:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:23.634339 | orchestrator | 2026-04-11 04:07:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:23.636154 | orchestrator | 2026-04-11 04:07:23 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:23.636191 | orchestrator | 2026-04-11 04:07:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:26.682700 | orchestrator | 2026-04-11 04:07:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:26.683678 | orchestrator | 2026-04-11 04:07:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:26.683747 | orchestrator | 2026-04-11 04:07:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:29.728307 | orchestrator | 2026-04-11 04:07:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:29.730185 | orchestrator | 2026-04-11 04:07:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:29.730259 | orchestrator | 2026-04-11 04:07:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:32.775430 | orchestrator | 2026-04-11 04:07:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:32.777026 | orchestrator | 2026-04-11 04:07:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:32.777091 | orchestrator | 2026-04-11 04:07:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:35.824209 | orchestrator | 2026-04-11 04:07:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:35.825590 | orchestrator | 2026-04-11 04:07:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:35.825626 | orchestrator | 2026-04-11 04:07:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:38.878345 | orchestrator | 2026-04-11 04:07:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:38.879558 | orchestrator | 2026-04-11 04:07:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
04:07:38.879594 | orchestrator | 2026-04-11 04:07:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:41.925142 | orchestrator | 2026-04-11 04:07:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:41.927844 | orchestrator | 2026-04-11 04:07:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:41.927897 | orchestrator | 2026-04-11 04:07:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:44.970877 | orchestrator | 2026-04-11 04:07:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:44.971214 | orchestrator | 2026-04-11 04:07:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:44.971241 | orchestrator | 2026-04-11 04:07:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:48.014740 | orchestrator | 2026-04-11 04:07:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:48.016227 | orchestrator | 2026-04-11 04:07:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:48.016265 | orchestrator | 2026-04-11 04:07:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:51.061424 | orchestrator | 2026-04-11 04:07:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:51.062649 | orchestrator | 2026-04-11 04:07:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:51.062696 | orchestrator | 2026-04-11 04:07:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:07:54.104051 | orchestrator | 2026-04-11 04:07:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:54.105524 | orchestrator | 2026-04-11 04:07:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:54.105564 | orchestrator | 2026-04-11 04:07:54 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 04:07:57.152667 | orchestrator | 2026-04-11 04:07:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:07:57.153720 | orchestrator | 2026-04-11 04:07:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:07:57.153816 | orchestrator | 2026-04-11 04:07:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:00.198434 | orchestrator | 2026-04-11 04:08:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:00.200601 | orchestrator | 2026-04-11 04:08:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:00.200711 | orchestrator | 2026-04-11 04:08:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:03.236575 | orchestrator | 2026-04-11 04:08:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:03.236856 | orchestrator | 2026-04-11 04:08:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:03.236886 | orchestrator | 2026-04-11 04:08:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:06.276850 | orchestrator | 2026-04-11 04:08:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:06.278469 | orchestrator | 2026-04-11 04:08:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:06.278601 | orchestrator | 2026-04-11 04:08:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:09.324816 | orchestrator | 2026-04-11 04:08:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:09.326336 | orchestrator | 2026-04-11 04:08:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:09.326401 | orchestrator | 2026-04-11 04:08:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:12.371226 | orchestrator | 2026-04-11 
04:08:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:12.373074 | orchestrator | 2026-04-11 04:08:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:12.373173 | orchestrator | 2026-04-11 04:08:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:15.423401 | orchestrator | 2026-04-11 04:08:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:15.424367 | orchestrator | 2026-04-11 04:08:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:15.424428 | orchestrator | 2026-04-11 04:08:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:18.470842 | orchestrator | 2026-04-11 04:08:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:18.472330 | orchestrator | 2026-04-11 04:08:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:18.472418 | orchestrator | 2026-04-11 04:08:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:21.513469 | orchestrator | 2026-04-11 04:08:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:21.515351 | orchestrator | 2026-04-11 04:08:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:21.515433 | orchestrator | 2026-04-11 04:08:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:24.554387 | orchestrator | 2026-04-11 04:08:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:24.555714 | orchestrator | 2026-04-11 04:08:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:24.555761 | orchestrator | 2026-04-11 04:08:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:27.595880 | orchestrator | 2026-04-11 04:08:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 04:08:27.596633 | orchestrator | 2026-04-11 04:08:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:27.596680 | orchestrator | 2026-04-11 04:08:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:30.650336 | orchestrator | 2026-04-11 04:08:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:30.652173 | orchestrator | 2026-04-11 04:08:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:30.652218 | orchestrator | 2026-04-11 04:08:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:33.699248 | orchestrator | 2026-04-11 04:08:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:33.702803 | orchestrator | 2026-04-11 04:08:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:33.702947 | orchestrator | 2026-04-11 04:08:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:36.748126 | orchestrator | 2026-04-11 04:08:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:36.749269 | orchestrator | 2026-04-11 04:08:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:36.749307 | orchestrator | 2026-04-11 04:08:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:39.797404 | orchestrator | 2026-04-11 04:08:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:39.798133 | orchestrator | 2026-04-11 04:08:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:39.798174 | orchestrator | 2026-04-11 04:08:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:42.842909 | orchestrator | 2026-04-11 04:08:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:42.845018 | orchestrator | 2026-04-11 04:08:42 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:42.845059 | orchestrator | 2026-04-11 04:08:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:45.886678 | orchestrator | 2026-04-11 04:08:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:45.888931 | orchestrator | 2026-04-11 04:08:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:45.889029 | orchestrator | 2026-04-11 04:08:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:48.937996 | orchestrator | 2026-04-11 04:08:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:48.939858 | orchestrator | 2026-04-11 04:08:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:48.939907 | orchestrator | 2026-04-11 04:08:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:51.990057 | orchestrator | 2026-04-11 04:08:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:51.992807 | orchestrator | 2026-04-11 04:08:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:51.992894 | orchestrator | 2026-04-11 04:08:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:55.043756 | orchestrator | 2026-04-11 04:08:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:55.046090 | orchestrator | 2026-04-11 04:08:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:08:55.046164 | orchestrator | 2026-04-11 04:08:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:08:58.086478 | orchestrator | 2026-04-11 04:08:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:08:58.087444 | orchestrator | 2026-04-11 04:08:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
04:08:58.087536 | orchestrator | 2026-04-11 04:08:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:01.124629 | orchestrator | 2026-04-11 04:09:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:01.126517 | orchestrator | 2026-04-11 04:09:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:01.126775 | orchestrator | 2026-04-11 04:09:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:04.169014 | orchestrator | 2026-04-11 04:09:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:04.169661 | orchestrator | 2026-04-11 04:09:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:04.169690 | orchestrator | 2026-04-11 04:09:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:07.215586 | orchestrator | 2026-04-11 04:09:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:07.217055 | orchestrator | 2026-04-11 04:09:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:07.217168 | orchestrator | 2026-04-11 04:09:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:10.267028 | orchestrator | 2026-04-11 04:09:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:10.268545 | orchestrator | 2026-04-11 04:09:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:10.268616 | orchestrator | 2026-04-11 04:09:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:13.319047 | orchestrator | 2026-04-11 04:09:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:13.322164 | orchestrator | 2026-04-11 04:09:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:13.322232 | orchestrator | 2026-04-11 04:09:13 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 04:09:16.371102 | orchestrator | 2026-04-11 04:09:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:16.372572 | orchestrator | 2026-04-11 04:09:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:16.372640 | orchestrator | 2026-04-11 04:09:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:19.425574 | orchestrator | 2026-04-11 04:09:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:19.429105 | orchestrator | 2026-04-11 04:09:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:19.429169 | orchestrator | 2026-04-11 04:09:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:22.482249 | orchestrator | 2026-04-11 04:09:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:22.483547 | orchestrator | 2026-04-11 04:09:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:22.483602 | orchestrator | 2026-04-11 04:09:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:25.528928 | orchestrator | 2026-04-11 04:09:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:25.529661 | orchestrator | 2026-04-11 04:09:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:25.529692 | orchestrator | 2026-04-11 04:09:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:28.582561 | orchestrator | 2026-04-11 04:09:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:28.584962 | orchestrator | 2026-04-11 04:09:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:28.585051 | orchestrator | 2026-04-11 04:09:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:31.633370 | orchestrator | 2026-04-11 
04:09:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:31.635327 | orchestrator | 2026-04-11 04:09:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:31.635383 | orchestrator | 2026-04-11 04:09:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:34.683793 | orchestrator | 2026-04-11 04:09:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:34.686115 | orchestrator | 2026-04-11 04:09:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:34.686165 | orchestrator | 2026-04-11 04:09:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:37.727618 | orchestrator | 2026-04-11 04:09:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:37.729694 | orchestrator | 2026-04-11 04:09:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:37.729761 | orchestrator | 2026-04-11 04:09:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:40.771043 | orchestrator | 2026-04-11 04:09:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:40.772545 | orchestrator | 2026-04-11 04:09:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:40.772657 | orchestrator | 2026-04-11 04:09:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:43.829673 | orchestrator | 2026-04-11 04:09:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:09:43.832181 | orchestrator | 2026-04-11 04:09:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:09:43.832284 | orchestrator | 2026-04-11 04:09:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:09:46.883459 | orchestrator | 2026-04-11 04:09:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 04:09:46.885013 | orchestrator | 2026-04-11 04:09:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:09:46.885065 | orchestrator | 2026-04-11 04:09:46 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:09:49.935794 | orchestrator | 2026-04-11 04:09:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:09:49.937285 | orchestrator | 2026-04-11 04:09:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:09:49.937523 | orchestrator | 2026-04-11 04:09:49 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: the same three entries repeat every ~3 seconds, with tasks e775d4e9-d8d9-4a6e-852d-ec153b94cbfc and 27a6903c-a509-4fb9-bf78-996bf8bed0c6 remaining in state STARTED from 04:09:52 through 04:15:01 ...]
2026-04-11 04:15:01.190354 | orchestrator | 2026-04-11 04:15:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:15:01.193654 | orchestrator | 2026-04-11 04:15:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:15:01.193773 | orchestrator | 2026-04-11 04:15:01 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:15:04.241975 | orchestrator | 2026-04-11 04:15:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state
STARTED 2026-04-11 04:15:04.244832 | orchestrator | 2026-04-11 04:15:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:04.244916 | orchestrator | 2026-04-11 04:15:04 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:07.292987 | orchestrator | 2026-04-11 04:15:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:07.294101 | orchestrator | 2026-04-11 04:15:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:07.294145 | orchestrator | 2026-04-11 04:15:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:10.343572 | orchestrator | 2026-04-11 04:15:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:10.344393 | orchestrator | 2026-04-11 04:15:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:10.344469 | orchestrator | 2026-04-11 04:15:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:13.395725 | orchestrator | 2026-04-11 04:15:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:13.397194 | orchestrator | 2026-04-11 04:15:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:13.397230 | orchestrator | 2026-04-11 04:15:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:16.438826 | orchestrator | 2026-04-11 04:15:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:16.441409 | orchestrator | 2026-04-11 04:15:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:16.441473 | orchestrator | 2026-04-11 04:15:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:19.489971 | orchestrator | 2026-04-11 04:15:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:19.492424 | orchestrator | 2026-04-11 04:15:19 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:19.492533 | orchestrator | 2026-04-11 04:15:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:22.530593 | orchestrator | 2026-04-11 04:15:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:22.531814 | orchestrator | 2026-04-11 04:15:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:22.531889 | orchestrator | 2026-04-11 04:15:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:25.581724 | orchestrator | 2026-04-11 04:15:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:25.582955 | orchestrator | 2026-04-11 04:15:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:25.583013 | orchestrator | 2026-04-11 04:15:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:28.629026 | orchestrator | 2026-04-11 04:15:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:28.630726 | orchestrator | 2026-04-11 04:15:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:28.630797 | orchestrator | 2026-04-11 04:15:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:31.668651 | orchestrator | 2026-04-11 04:15:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:31.669819 | orchestrator | 2026-04-11 04:15:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:31.669890 | orchestrator | 2026-04-11 04:15:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:34.706959 | orchestrator | 2026-04-11 04:15:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:34.707420 | orchestrator | 2026-04-11 04:15:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
04:15:34.707467 | orchestrator | 2026-04-11 04:15:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:37.752377 | orchestrator | 2026-04-11 04:15:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:37.753299 | orchestrator | 2026-04-11 04:15:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:37.753345 | orchestrator | 2026-04-11 04:15:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:40.801330 | orchestrator | 2026-04-11 04:15:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:40.802169 | orchestrator | 2026-04-11 04:15:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:40.802206 | orchestrator | 2026-04-11 04:15:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:43.853187 | orchestrator | 2026-04-11 04:15:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:43.854900 | orchestrator | 2026-04-11 04:15:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:43.854985 | orchestrator | 2026-04-11 04:15:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:46.897753 | orchestrator | 2026-04-11 04:15:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:46.899338 | orchestrator | 2026-04-11 04:15:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:46.899422 | orchestrator | 2026-04-11 04:15:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:49.953613 | orchestrator | 2026-04-11 04:15:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:49.956480 | orchestrator | 2026-04-11 04:15:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:49.956598 | orchestrator | 2026-04-11 04:15:49 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 04:15:53.008865 | orchestrator | 2026-04-11 04:15:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:53.009629 | orchestrator | 2026-04-11 04:15:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:53.009668 | orchestrator | 2026-04-11 04:15:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:56.053833 | orchestrator | 2026-04-11 04:15:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:56.054395 | orchestrator | 2026-04-11 04:15:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:56.054470 | orchestrator | 2026-04-11 04:15:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:15:59.094733 | orchestrator | 2026-04-11 04:15:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:15:59.096787 | orchestrator | 2026-04-11 04:15:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:15:59.096831 | orchestrator | 2026-04-11 04:15:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:02.137984 | orchestrator | 2026-04-11 04:16:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:02.140298 | orchestrator | 2026-04-11 04:16:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:02.140398 | orchestrator | 2026-04-11 04:16:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:05.184053 | orchestrator | 2026-04-11 04:16:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:05.186164 | orchestrator | 2026-04-11 04:16:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:05.186344 | orchestrator | 2026-04-11 04:16:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:08.231681 | orchestrator | 2026-04-11 
04:16:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:08.233936 | orchestrator | 2026-04-11 04:16:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:08.234063 | orchestrator | 2026-04-11 04:16:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:11.282594 | orchestrator | 2026-04-11 04:16:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:11.283552 | orchestrator | 2026-04-11 04:16:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:11.283594 | orchestrator | 2026-04-11 04:16:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:14.323304 | orchestrator | 2026-04-11 04:16:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:14.323893 | orchestrator | 2026-04-11 04:16:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:14.323916 | orchestrator | 2026-04-11 04:16:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:17.367984 | orchestrator | 2026-04-11 04:16:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:17.369621 | orchestrator | 2026-04-11 04:16:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:17.369677 | orchestrator | 2026-04-11 04:16:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:20.407282 | orchestrator | 2026-04-11 04:16:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:20.408376 | orchestrator | 2026-04-11 04:16:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:20.408414 | orchestrator | 2026-04-11 04:16:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:23.447955 | orchestrator | 2026-04-11 04:16:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 04:16:23.448157 | orchestrator | 2026-04-11 04:16:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:23.448182 | orchestrator | 2026-04-11 04:16:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:26.497637 | orchestrator | 2026-04-11 04:16:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:26.498965 | orchestrator | 2026-04-11 04:16:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:26.499020 | orchestrator | 2026-04-11 04:16:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:29.550613 | orchestrator | 2026-04-11 04:16:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:29.552589 | orchestrator | 2026-04-11 04:16:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:29.552691 | orchestrator | 2026-04-11 04:16:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:32.596996 | orchestrator | 2026-04-11 04:16:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:32.600091 | orchestrator | 2026-04-11 04:16:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:32.600222 | orchestrator | 2026-04-11 04:16:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:35.648856 | orchestrator | 2026-04-11 04:16:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:35.651266 | orchestrator | 2026-04-11 04:16:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:35.651298 | orchestrator | 2026-04-11 04:16:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:38.707065 | orchestrator | 2026-04-11 04:16:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:38.708664 | orchestrator | 2026-04-11 04:16:38 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:38.708718 | orchestrator | 2026-04-11 04:16:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:41.767556 | orchestrator | 2026-04-11 04:16:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:41.769620 | orchestrator | 2026-04-11 04:16:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:41.769681 | orchestrator | 2026-04-11 04:16:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:44.815337 | orchestrator | 2026-04-11 04:16:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:44.817626 | orchestrator | 2026-04-11 04:16:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:44.817745 | orchestrator | 2026-04-11 04:16:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:47.858363 | orchestrator | 2026-04-11 04:16:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:47.859404 | orchestrator | 2026-04-11 04:16:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:47.859480 | orchestrator | 2026-04-11 04:16:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:50.911872 | orchestrator | 2026-04-11 04:16:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:50.913373 | orchestrator | 2026-04-11 04:16:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:50.913433 | orchestrator | 2026-04-11 04:16:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:53.964643 | orchestrator | 2026-04-11 04:16:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:53.968530 | orchestrator | 2026-04-11 04:16:53 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
04:16:53.968612 | orchestrator | 2026-04-11 04:16:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:16:57.014156 | orchestrator | 2026-04-11 04:16:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:16:57.016633 | orchestrator | 2026-04-11 04:16:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:16:57.016712 | orchestrator | 2026-04-11 04:16:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:00.048179 | orchestrator | 2026-04-11 04:17:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:00.049318 | orchestrator | 2026-04-11 04:17:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:00.049370 | orchestrator | 2026-04-11 04:17:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:03.100794 | orchestrator | 2026-04-11 04:17:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:03.103136 | orchestrator | 2026-04-11 04:17:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:03.103196 | orchestrator | 2026-04-11 04:17:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:06.142774 | orchestrator | 2026-04-11 04:17:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:06.143788 | orchestrator | 2026-04-11 04:17:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:06.143834 | orchestrator | 2026-04-11 04:17:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:09.182991 | orchestrator | 2026-04-11 04:17:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:09.183863 | orchestrator | 2026-04-11 04:17:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:09.183894 | orchestrator | 2026-04-11 04:17:09 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 04:17:12.234201 | orchestrator | 2026-04-11 04:17:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:12.236813 | orchestrator | 2026-04-11 04:17:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:12.236870 | orchestrator | 2026-04-11 04:17:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:15.286718 | orchestrator | 2026-04-11 04:17:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:15.287098 | orchestrator | 2026-04-11 04:17:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:15.287120 | orchestrator | 2026-04-11 04:17:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:18.330261 | orchestrator | 2026-04-11 04:17:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:18.331906 | orchestrator | 2026-04-11 04:17:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:18.331952 | orchestrator | 2026-04-11 04:17:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:21.381408 | orchestrator | 2026-04-11 04:17:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:21.382898 | orchestrator | 2026-04-11 04:17:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:21.382947 | orchestrator | 2026-04-11 04:17:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:24.419754 | orchestrator | 2026-04-11 04:17:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:24.421823 | orchestrator | 2026-04-11 04:17:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:24.421890 | orchestrator | 2026-04-11 04:17:24 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:27.461252 | orchestrator | 2026-04-11 
04:17:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:27.461839 | orchestrator | 2026-04-11 04:17:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:27.461892 | orchestrator | 2026-04-11 04:17:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:30.498918 | orchestrator | 2026-04-11 04:17:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:30.500075 | orchestrator | 2026-04-11 04:17:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:30.500119 | orchestrator | 2026-04-11 04:17:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:33.537311 | orchestrator | 2026-04-11 04:17:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:33.537817 | orchestrator | 2026-04-11 04:17:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:33.537887 | orchestrator | 2026-04-11 04:17:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:36.574382 | orchestrator | 2026-04-11 04:17:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:36.574827 | orchestrator | 2026-04-11 04:17:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:36.574861 | orchestrator | 2026-04-11 04:17:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:39.615567 | orchestrator | 2026-04-11 04:17:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:39.616874 | orchestrator | 2026-04-11 04:17:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:39.617298 | orchestrator | 2026-04-11 04:17:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:42.654267 | orchestrator | 2026-04-11 04:17:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 04:17:42.655392 | orchestrator | 2026-04-11 04:17:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:42.655487 | orchestrator | 2026-04-11 04:17:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:45.710192 | orchestrator | 2026-04-11 04:17:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:45.711302 | orchestrator | 2026-04-11 04:17:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:45.711379 | orchestrator | 2026-04-11 04:17:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:48.762558 | orchestrator | 2026-04-11 04:17:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:48.765076 | orchestrator | 2026-04-11 04:17:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:48.765166 | orchestrator | 2026-04-11 04:17:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:51.802156 | orchestrator | 2026-04-11 04:17:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:51.804842 | orchestrator | 2026-04-11 04:17:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:51.804940 | orchestrator | 2026-04-11 04:17:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:54.847268 | orchestrator | 2026-04-11 04:17:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:54.849799 | orchestrator | 2026-04-11 04:17:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:54.849874 | orchestrator | 2026-04-11 04:17:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:17:57.889094 | orchestrator | 2026-04-11 04:17:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:17:57.891036 | orchestrator | 2026-04-11 04:17:57 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:17:57.891085 | orchestrator | 2026-04-11 04:17:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:18:00.935566 | orchestrator | 2026-04-11 04:18:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:18:00.938285 | orchestrator | 2026-04-11 04:18:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:18:00.938390 | orchestrator | 2026-04-11 04:18:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:18:03.974230 | orchestrator | 2026-04-11 04:18:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:18:03.975894 | orchestrator | 2026-04-11 04:18:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:18:03.975952 | orchestrator | 2026-04-11 04:18:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:18:07.019156 | orchestrator | 2026-04-11 04:18:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:18:07.021417 | orchestrator | 2026-04-11 04:18:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:18:07.021521 | orchestrator | 2026-04-11 04:18:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:18:10.062388 | orchestrator | 2026-04-11 04:18:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:18:10.064208 | orchestrator | 2026-04-11 04:18:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:18:10.064273 | orchestrator | 2026-04-11 04:18:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:18:13.108754 | orchestrator | 2026-04-11 04:18:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:18:13.111001 | orchestrator | 2026-04-11 04:18:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
04:18:13.111077 | orchestrator | 2026-04-11 04:18:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:18:16.144615 | orchestrator | 2026-04-11 04:18:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:18:16.146706 | orchestrator | 2026-04-11 04:18:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:18:16.146761 | orchestrator | 2026-04-11 04:18:16 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:18:19.195051 | orchestrator | 2026-04-11 04:18:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:18:19.197001 | orchestrator | 2026-04-11 04:18:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:18:19.197057 | orchestrator | 2026-04-11 04:18:19 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:18:22.251702 | orchestrator | 2026-04-11 04:18:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:18:22.254890 | orchestrator | 2026-04-11 04:18:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:18:22.254965 | orchestrator | 2026-04-11 04:18:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:18:25.307725 | orchestrator | 2026-04-11 04:18:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:18:25.309157 | orchestrator | 2026-04-11 04:18:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:18:25.309214 | orchestrator | 2026-04-11 04:18:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:18:28.370825 | orchestrator | 2026-04-11 04:18:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:18:28.372762 | orchestrator | 2026-04-11 04:18:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:18:28.372922 | orchestrator | 2026-04-11 04:18:28 | INFO  | Wait 1 second(s) 
until the next check
2026-04-11 04:18:31.420912 | orchestrator | 2026-04-11 04:18:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:18:31.423354 | orchestrator | 2026-04-11 04:18:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:18:31.423489 | orchestrator | 2026-04-11 04:18:31 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 04:18:34 to 04:23:42; both tasks remained in state STARTED throughout ...]
2026-04-11 04:23:45.454450 | orchestrator | 2026-04-11 04:23:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:23:45.455889 | orchestrator | 2026-04-11 04:23:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:23:45.455948 | orchestrator | 2026-04-11 04:23:45 | INFO  | Wait 1 second(s)
until the next check 2026-04-11 04:23:48.505254 | orchestrator | 2026-04-11 04:23:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:23:48.507094 | orchestrator | 2026-04-11 04:23:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:23:48.507159 | orchestrator | 2026-04-11 04:23:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:23:51.552895 | orchestrator | 2026-04-11 04:23:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:23:51.555094 | orchestrator | 2026-04-11 04:23:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:23:51.555176 | orchestrator | 2026-04-11 04:23:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:23:54.602828 | orchestrator | 2026-04-11 04:23:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:23:54.604066 | orchestrator | 2026-04-11 04:23:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:23:54.604095 | orchestrator | 2026-04-11 04:23:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:23:57.647674 | orchestrator | 2026-04-11 04:23:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:23:57.647919 | orchestrator | 2026-04-11 04:23:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:23:57.648664 | orchestrator | 2026-04-11 04:23:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:00.696094 | orchestrator | 2026-04-11 04:24:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:00.697544 | orchestrator | 2026-04-11 04:24:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:00.697579 | orchestrator | 2026-04-11 04:24:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:03.749608 | orchestrator | 2026-04-11 
04:24:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:03.750420 | orchestrator | 2026-04-11 04:24:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:03.750461 | orchestrator | 2026-04-11 04:24:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:06.798105 | orchestrator | 2026-04-11 04:24:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:06.801349 | orchestrator | 2026-04-11 04:24:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:06.801408 | orchestrator | 2026-04-11 04:24:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:09.838174 | orchestrator | 2026-04-11 04:24:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:09.838272 | orchestrator | 2026-04-11 04:24:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:09.838282 | orchestrator | 2026-04-11 04:24:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:12.882379 | orchestrator | 2026-04-11 04:24:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:12.883895 | orchestrator | 2026-04-11 04:24:12 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:12.883969 | orchestrator | 2026-04-11 04:24:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:15.931794 | orchestrator | 2026-04-11 04:24:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:15.933892 | orchestrator | 2026-04-11 04:24:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:15.933953 | orchestrator | 2026-04-11 04:24:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:18.985737 | orchestrator | 2026-04-11 04:24:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 04:24:18.986227 | orchestrator | 2026-04-11 04:24:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:18.987072 | orchestrator | 2026-04-11 04:24:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:22.038468 | orchestrator | 2026-04-11 04:24:22 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:22.039610 | orchestrator | 2026-04-11 04:24:22 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:22.039639 | orchestrator | 2026-04-11 04:24:22 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:25.086321 | orchestrator | 2026-04-11 04:24:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:25.086606 | orchestrator | 2026-04-11 04:24:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:25.086631 | orchestrator | 2026-04-11 04:24:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:28.129031 | orchestrator | 2026-04-11 04:24:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:28.131750 | orchestrator | 2026-04-11 04:24:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:28.131909 | orchestrator | 2026-04-11 04:24:28 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:31.186048 | orchestrator | 2026-04-11 04:24:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:31.188700 | orchestrator | 2026-04-11 04:24:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:31.188755 | orchestrator | 2026-04-11 04:24:31 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:34.242003 | orchestrator | 2026-04-11 04:24:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:34.243182 | orchestrator | 2026-04-11 04:24:34 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:34.243831 | orchestrator | 2026-04-11 04:24:34 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:37.285195 | orchestrator | 2026-04-11 04:24:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:37.286375 | orchestrator | 2026-04-11 04:24:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:37.286478 | orchestrator | 2026-04-11 04:24:37 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:40.331961 | orchestrator | 2026-04-11 04:24:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:40.333089 | orchestrator | 2026-04-11 04:24:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:40.333109 | orchestrator | 2026-04-11 04:24:40 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:24:43.376128 | orchestrator | 2026-04-11 04:24:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:24:43.377227 | orchestrator | 2026-04-11 04:24:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:24:43.377281 | orchestrator | 2026-04-11 04:24:43 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:26:46.519802 | orchestrator | 2026-04-11 04:26:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:26:46.519907 | orchestrator | 2026-04-11 04:26:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:26:46.519959 | orchestrator | 2026-04-11 04:26:46 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:26:49.572224 | orchestrator | 2026-04-11 04:26:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:26:49.575286 | orchestrator | 2026-04-11 04:26:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
04:26:49.575349 | orchestrator | 2026-04-11 04:26:49 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:26:52.616218 | orchestrator | 2026-04-11 04:26:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:26:52.616945 | orchestrator | 2026-04-11 04:26:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:26:52.617015 | orchestrator | 2026-04-11 04:26:52 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:26:55.675602 | orchestrator | 2026-04-11 04:26:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:26:55.678544 | orchestrator | 2026-04-11 04:26:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:26:55.678630 | orchestrator | 2026-04-11 04:26:55 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:26:58.724135 | orchestrator | 2026-04-11 04:26:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:26:58.725478 | orchestrator | 2026-04-11 04:26:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:26:58.725552 | orchestrator | 2026-04-11 04:26:58 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:01.776193 | orchestrator | 2026-04-11 04:27:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:01.778066 | orchestrator | 2026-04-11 04:27:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:01.778100 | orchestrator | 2026-04-11 04:27:01 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:04.823918 | orchestrator | 2026-04-11 04:27:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:04.825510 | orchestrator | 2026-04-11 04:27:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:04.825568 | orchestrator | 2026-04-11 04:27:04 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 04:27:07.874217 | orchestrator | 2026-04-11 04:27:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:07.876600 | orchestrator | 2026-04-11 04:27:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:07.876671 | orchestrator | 2026-04-11 04:27:07 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:10.923421 | orchestrator | 2026-04-11 04:27:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:10.924916 | orchestrator | 2026-04-11 04:27:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:10.924962 | orchestrator | 2026-04-11 04:27:10 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:13.973688 | orchestrator | 2026-04-11 04:27:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:13.975764 | orchestrator | 2026-04-11 04:27:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:13.975825 | orchestrator | 2026-04-11 04:27:13 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:17.017275 | orchestrator | 2026-04-11 04:27:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:17.019941 | orchestrator | 2026-04-11 04:27:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:17.020024 | orchestrator | 2026-04-11 04:27:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:20.060369 | orchestrator | 2026-04-11 04:27:20 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:20.061632 | orchestrator | 2026-04-11 04:27:20 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:20.061661 | orchestrator | 2026-04-11 04:27:20 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:23.104444 | orchestrator | 2026-04-11 
04:27:23 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:23.105751 | orchestrator | 2026-04-11 04:27:23 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:23.105796 | orchestrator | 2026-04-11 04:27:23 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:26.151935 | orchestrator | 2026-04-11 04:27:26 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:26.153428 | orchestrator | 2026-04-11 04:27:26 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:26.153485 | orchestrator | 2026-04-11 04:27:26 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:29.192884 | orchestrator | 2026-04-11 04:27:29 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:29.196638 | orchestrator | 2026-04-11 04:27:29 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:29.196720 | orchestrator | 2026-04-11 04:27:29 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:32.249906 | orchestrator | 2026-04-11 04:27:32 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:32.251509 | orchestrator | 2026-04-11 04:27:32 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:32.251573 | orchestrator | 2026-04-11 04:27:32 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:35.303568 | orchestrator | 2026-04-11 04:27:35 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:35.304592 | orchestrator | 2026-04-11 04:27:35 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:35.304652 | orchestrator | 2026-04-11 04:27:35 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:38.355365 | orchestrator | 2026-04-11 04:27:38 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 04:27:38.357430 | orchestrator | 2026-04-11 04:27:38 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:38.357487 | orchestrator | 2026-04-11 04:27:38 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:41.405667 | orchestrator | 2026-04-11 04:27:41 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:41.407339 | orchestrator | 2026-04-11 04:27:41 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:41.407457 | orchestrator | 2026-04-11 04:27:41 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:44.458901 | orchestrator | 2026-04-11 04:27:44 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:44.460211 | orchestrator | 2026-04-11 04:27:44 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:44.460334 | orchestrator | 2026-04-11 04:27:44 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:47.509500 | orchestrator | 2026-04-11 04:27:47 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:47.512219 | orchestrator | 2026-04-11 04:27:47 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:47.512346 | orchestrator | 2026-04-11 04:27:47 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:50.560040 | orchestrator | 2026-04-11 04:27:50 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:50.562418 | orchestrator | 2026-04-11 04:27:50 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:50.562484 | orchestrator | 2026-04-11 04:27:50 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:53.611609 | orchestrator | 2026-04-11 04:27:53 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:53.611905 | orchestrator | 2026-04-11 04:27:53 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:53.612812 | orchestrator | 2026-04-11 04:27:53 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:56.651032 | orchestrator | 2026-04-11 04:27:56 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:56.651613 | orchestrator | 2026-04-11 04:27:56 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:56.651665 | orchestrator | 2026-04-11 04:27:56 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:27:59.696747 | orchestrator | 2026-04-11 04:27:59 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:27:59.698321 | orchestrator | 2026-04-11 04:27:59 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:27:59.698355 | orchestrator | 2026-04-11 04:27:59 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:02.743979 | orchestrator | 2026-04-11 04:28:02 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:02.744229 | orchestrator | 2026-04-11 04:28:02 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:02.744466 | orchestrator | 2026-04-11 04:28:02 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:05.788705 | orchestrator | 2026-04-11 04:28:05 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:05.789949 | orchestrator | 2026-04-11 04:28:05 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:05.789993 | orchestrator | 2026-04-11 04:28:05 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:08.841705 | orchestrator | 2026-04-11 04:28:08 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:08.843049 | orchestrator | 2026-04-11 04:28:08 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
04:28:08.843091 | orchestrator | 2026-04-11 04:28:08 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:11.892410 | orchestrator | 2026-04-11 04:28:11 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:11.894329 | orchestrator | 2026-04-11 04:28:11 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:11.894375 | orchestrator | 2026-04-11 04:28:11 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:14.943223 | orchestrator | 2026-04-11 04:28:14 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:14.944561 | orchestrator | 2026-04-11 04:28:14 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:14.944596 | orchestrator | 2026-04-11 04:28:14 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:17.993509 | orchestrator | 2026-04-11 04:28:17 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:17.994965 | orchestrator | 2026-04-11 04:28:17 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:17.995015 | orchestrator | 2026-04-11 04:28:17 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:21.044674 | orchestrator | 2026-04-11 04:28:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:21.045532 | orchestrator | 2026-04-11 04:28:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:21.045589 | orchestrator | 2026-04-11 04:28:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:24.093935 | orchestrator | 2026-04-11 04:28:24 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:24.094406 | orchestrator | 2026-04-11 04:28:24 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:24.094448 | orchestrator | 2026-04-11 04:28:24 | INFO  | Wait 1 second(s) 
until the next check 2026-04-11 04:28:27.131050 | orchestrator | 2026-04-11 04:28:27 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:27.131878 | orchestrator | 2026-04-11 04:28:27 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:27.131938 | orchestrator | 2026-04-11 04:28:27 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:30.173167 | orchestrator | 2026-04-11 04:28:30 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:30.174311 | orchestrator | 2026-04-11 04:28:30 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:30.174346 | orchestrator | 2026-04-11 04:28:30 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:33.211660 | orchestrator | 2026-04-11 04:28:33 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:33.214347 | orchestrator | 2026-04-11 04:28:33 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:33.214409 | orchestrator | 2026-04-11 04:28:33 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:36.259784 | orchestrator | 2026-04-11 04:28:36 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:36.259971 | orchestrator | 2026-04-11 04:28:36 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:36.259990 | orchestrator | 2026-04-11 04:28:36 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:39.303754 | orchestrator | 2026-04-11 04:28:39 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:39.306936 | orchestrator | 2026-04-11 04:28:39 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:39.307037 | orchestrator | 2026-04-11 04:28:39 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:42.354120 | orchestrator | 2026-04-11 
04:28:42 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:42.355031 | orchestrator | 2026-04-11 04:28:42 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:42.355074 | orchestrator | 2026-04-11 04:28:42 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:45.397896 | orchestrator | 2026-04-11 04:28:45 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:45.400065 | orchestrator | 2026-04-11 04:28:45 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:45.400104 | orchestrator | 2026-04-11 04:28:45 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:48.452898 | orchestrator | 2026-04-11 04:28:48 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:48.454503 | orchestrator | 2026-04-11 04:28:48 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:48.454563 | orchestrator | 2026-04-11 04:28:48 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:51.498647 | orchestrator | 2026-04-11 04:28:51 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:51.501568 | orchestrator | 2026-04-11 04:28:51 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:51.501621 | orchestrator | 2026-04-11 04:28:51 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:54.555749 | orchestrator | 2026-04-11 04:28:54 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:28:54.558257 | orchestrator | 2026-04-11 04:28:54 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:54.558329 | orchestrator | 2026-04-11 04:28:54 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:28:57.599805 | orchestrator | 2026-04-11 04:28:57 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state 
STARTED 2026-04-11 04:28:57.601844 | orchestrator | 2026-04-11 04:28:57 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:28:57.601902 | orchestrator | 2026-04-11 04:28:57 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:29:00.654617 | orchestrator | 2026-04-11 04:29:00 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:29:00.656072 | orchestrator | 2026-04-11 04:29:00 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:29:00.656143 | orchestrator | 2026-04-11 04:29:00 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:29:03.709251 | orchestrator | 2026-04-11 04:29:03 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:29:03.711371 | orchestrator | 2026-04-11 04:29:03 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:29:03.711506 | orchestrator | 2026-04-11 04:29:03 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:29:06.748550 | orchestrator | 2026-04-11 04:29:06 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:29:06.749964 | orchestrator | 2026-04-11 04:29:06 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:29:06.750098 | orchestrator | 2026-04-11 04:29:06 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:29:09.788003 | orchestrator | 2026-04-11 04:29:09 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:29:09.788708 | orchestrator | 2026-04-11 04:29:09 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:29:09.788748 | orchestrator | 2026-04-11 04:29:09 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:29:12.824540 | orchestrator | 2026-04-11 04:29:12 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:29:12.825907 | orchestrator | 2026-04-11 04:29:12 | INFO  
| Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:29:12.825947 | orchestrator | 2026-04-11 04:29:12 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:29:15.866084 | orchestrator | 2026-04-11 04:29:15 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:29:15.866391 | orchestrator | 2026-04-11 04:29:15 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:29:15.866421 | orchestrator | 2026-04-11 04:29:15 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:29:18.910719 | orchestrator | 2026-04-11 04:29:18 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:29:18.912820 | orchestrator | 2026-04-11 04:29:18 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:29:18.912930 | orchestrator | 2026-04-11 04:29:18 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:29:21.957344 | orchestrator | 2026-04-11 04:29:21 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:29:21.958858 | orchestrator | 2026-04-11 04:29:21 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:29:21.959106 | orchestrator | 2026-04-11 04:29:21 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:29:25.011335 | orchestrator | 2026-04-11 04:29:25 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:29:25.012655 | orchestrator | 2026-04-11 04:29:25 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 04:29:25.012704 | orchestrator | 2026-04-11 04:29:25 | INFO  | Wait 1 second(s) until the next check 2026-04-11 04:29:28.060414 | orchestrator | 2026-04-11 04:29:28 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED 2026-04-11 04:29:28.060926 | orchestrator | 2026-04-11 04:29:28 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED 2026-04-11 
04:29:28.061643 | orchestrator | 2026-04-11 04:29:28 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:29:31.109653 | orchestrator | 2026-04-11 04:29:31 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:29:31.110306 | orchestrator | 2026-04-11 04:29:31 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:29:31.110454 | orchestrator | 2026-04-11 04:29:31 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:29:34.155724 | orchestrator | 2026-04-11 04:29:34 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:29:34.158592 | orchestrator | 2026-04-11 04:29:34 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:29:34.158642 | orchestrator | 2026-04-11 04:29:34 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:29:37.202632 | orchestrator | 2026-04-11 04:29:37 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:29:37.204163 | orchestrator | 2026-04-11 04:29:37 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:29:37.204217 | orchestrator | 2026-04-11 04:29:37 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:29:40.253747 | orchestrator | 2026-04-11 04:29:40 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:29:40.255884 | orchestrator | 2026-04-11 04:29:40 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:29:40.255966 | orchestrator | 2026-04-11 04:29:40 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:29:43.298921 | orchestrator | 2026-04-11 04:29:43 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:29:43.300644 | orchestrator | 2026-04-11 04:29:43 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:29:43.300702 | orchestrator | 2026-04-11 04:29:43 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:29:46.341365 | orchestrator | 2026-04-11 04:29:46 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:29:46.342512 | orchestrator | 2026-04-11 04:29:46 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:29:46.342611 | orchestrator | 2026-04-11 04:29:46 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:29:49.392072 | orchestrator | 2026-04-11 04:29:49 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:29:49.393720 | orchestrator | 2026-04-11 04:29:49 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:29:49.393757 | orchestrator | 2026-04-11 04:29:49 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:29:52.442568 | orchestrator | 2026-04-11 04:29:52 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:29:52.444702 | orchestrator | 2026-04-11 04:29:52 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:29:52.444761 | orchestrator | 2026-04-11 04:29:52 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:29:55.491676 | orchestrator | 2026-04-11 04:29:55 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:29:55.492051 | orchestrator | 2026-04-11 04:29:55 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:29:55.492128 | orchestrator | 2026-04-11 04:29:55 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:29:58.536734 | orchestrator | 2026-04-11 04:29:58 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:29:58.537801 | orchestrator | 2026-04-11 04:29:58 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:29:58.537872 | orchestrator | 2026-04-11 04:29:58 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:30:01.577977 | orchestrator | 2026-04-11 04:30:01 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:30:01.579313 | orchestrator | 2026-04-11 04:30:01 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:30:01.579368 | orchestrator | 2026-04-11 04:30:01 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:30:04.615548 | orchestrator | 2026-04-11 04:30:04 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:30:04.616242 | orchestrator | 2026-04-11 04:30:04 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:30:04.616281 | orchestrator | 2026-04-11 04:30:04 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:30:07.656539 | orchestrator | 2026-04-11 04:30:07 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:30:07.659041 | orchestrator | 2026-04-11 04:30:07 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:30:07.659097 | orchestrator | 2026-04-11 04:30:07 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:30:10.707165 | orchestrator | 2026-04-11 04:30:10 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:30:10.707392 | orchestrator | 2026-04-11 04:30:10 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:30:10.707430 | orchestrator | 2026-04-11 04:30:10 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:30:13.751746 | orchestrator | 2026-04-11 04:30:13 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:30:13.752726 | orchestrator | 2026-04-11 04:30:13 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:30:13.752895 | orchestrator | 2026-04-11 04:30:13 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:30:16.800410 | orchestrator | 2026-04-11 04:30:16 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:30:16.800752 | orchestrator | 2026-04-11 04:30:16 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:30:16.801360 | orchestrator | 2026-04-11 04:30:16 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:30:19.852101 | orchestrator | 2026-04-11 04:30:19 | INFO  | Task e775d4e9-d8d9-4a6e-852d-ec153b94cbfc is in state STARTED
2026-04-11 04:30:19.853580 | orchestrator | 2026-04-11 04:30:19 | INFO  | Task 27a6903c-a509-4fb9-bf78-996bf8bed0c6 is in state STARTED
2026-04-11 04:30:19.853641 | orchestrator | 2026-04-11 04:30:19 | INFO  | Wait 1 second(s) until the next check
2026-04-11 04:30:22.290971 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-11 04:30:22.298746 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-11 04:30:23.147574 |
2026-04-11 04:30:23.147748 | PLAY [Post output play]
2026-04-11 04:30:23.165393 |
2026-04-11 04:30:23.165546 | LOOP [stage-output : Register sources]
2026-04-11 04:30:23.237435 |
2026-04-11 04:30:23.237765 | TASK [stage-output : Check sudo]
2026-04-11 04:30:24.155502 | orchestrator | sudo: a password is required
2026-04-11 04:30:24.277716 | orchestrator | ok: Runtime: 0:00:00.011405
2026-04-11 04:30:24.284905 |
2026-04-11 04:30:24.285025 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-11 04:30:24.316710 |
2026-04-11 04:30:24.316908 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-11 04:30:24.386974 | orchestrator | ok
2026-04-11 04:30:24.395426 |
2026-04-11 04:30:24.395561 | LOOP [stage-output : Ensure target folders exist]
2026-04-11 04:30:24.899639 | orchestrator | ok: "docs"
2026-04-11 04:30:24.900089 |
2026-04-11 04:30:25.200236 | orchestrator | ok: "artifacts"
2026-04-11 04:30:25.478423 | orchestrator | ok: "logs"
2026-04-11 04:30:25.492376 |
2026-04-11 04:30:25.492532 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-11 04:30:25.528003 |
2026-04-11 04:30:25.528330 | TASK [stage-output : Make all log files readable]
2026-04-11 04:30:25.865994 | orchestrator | ok
2026-04-11 04:30:25.875923 |
2026-04-11 04:30:25.876079 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-11 04:30:25.911064 | orchestrator | skipping: Conditional result was False
2026-04-11 04:30:25.926756 |
2026-04-11 04:30:25.927015 | TASK [stage-output : Discover log files for compression]
2026-04-11 04:30:25.951717 | orchestrator | skipping: Conditional result was False
2026-04-11 04:30:25.961357 |
2026-04-11 04:30:25.961497 | LOOP [stage-output : Archive everything from logs]
2026-04-11 04:30:26.007729 |
2026-04-11 04:30:26.007927 | PLAY [Post cleanup play]
2026-04-11 04:30:26.017408 |
2026-04-11 04:30:26.017524 | TASK [Set cloud fact (Zuul deployment)]
2026-04-11 04:30:26.075540 | orchestrator | ok
2026-04-11 04:30:26.086428 |
2026-04-11 04:30:26.086553 | TASK [Set cloud fact (local deployment)]
2026-04-11 04:30:26.112417 | orchestrator | skipping: Conditional result was False
2026-04-11 04:30:26.127698 |
2026-04-11 04:30:26.127863 | TASK [Clean the cloud environment]
2026-04-11 04:30:26.869861 | orchestrator | 2026-04-11 04:30:26 - clean up servers
2026-04-11 04:30:27.872103 | orchestrator | 2026-04-11 04:30:27 - testbed-manager
2026-04-11 04:30:27.958121 | orchestrator | 2026-04-11 04:30:27 - testbed-node-5
2026-04-11 04:30:28.058492 | orchestrator | 2026-04-11 04:30:28 - testbed-node-1
2026-04-11 04:30:28.176307 | orchestrator | 2026-04-11 04:30:28 - testbed-node-4
2026-04-11 04:30:28.276368 | orchestrator | 2026-04-11 04:30:28 - testbed-node-3
2026-04-11 04:30:28.368597 | orchestrator | 2026-04-11 04:30:28 - testbed-node-0
2026-04-11 04:30:28.484179 | orchestrator | 2026-04-11 04:30:28 - testbed-node-2
2026-04-11 04:30:28.565205 | orchestrator | 2026-04-11 04:30:28 - clean up keypairs
2026-04-11 04:30:28.583408 | orchestrator | 2026-04-11 04:30:28 - testbed
2026-04-11 04:30:28.608263 | orchestrator | 2026-04-11 04:30:28 - wait for servers to be gone
2026-04-11 04:30:39.545391 | orchestrator | 2026-04-11 04:30:39 - clean up ports
2026-04-11 04:30:39.745292 | orchestrator | 2026-04-11 04:30:39 - 2675e5f3-e203-4e41-9155-40821df53afb
2026-04-11 04:30:40.825663 | orchestrator | 2026-04-11 04:30:40 - 2f210c2b-dffa-457b-8eec-d6aef2448544
2026-04-11 04:30:41.178997 | orchestrator | 2026-04-11 04:30:41 - 9d87224b-605b-4397-8197-29fbca720203
2026-04-11 04:30:41.454702 | orchestrator | 2026-04-11 04:30:41 - cee7ffa5-cf29-4376-9e7c-f48b9b88bde6
2026-04-11 04:30:41.715420 | orchestrator | 2026-04-11 04:30:41 - d5c72eea-71b3-4034-b533-ded7ce17674e
2026-04-11 04:30:41.994571 | orchestrator | 2026-04-11 04:30:41 - de529e63-8638-4c26-89cd-d2f42cfa0470
2026-04-11 04:30:42.208224 | orchestrator | 2026-04-11 04:30:42 - e1d2b0c2-6da0-42e8-b16c-b80f35010f4c
2026-04-11 04:30:42.692813 | orchestrator | 2026-04-11 04:30:42 - clean up volumes
2026-04-11 04:30:42.829140 | orchestrator | 2026-04-11 04:30:42 - testbed-volume-manager-base
2026-04-11 04:30:42.867783 | orchestrator | 2026-04-11 04:30:42 - testbed-volume-1-node-base
2026-04-11 04:30:42.908662 | orchestrator | 2026-04-11 04:30:42 - testbed-volume-3-node-base
2026-04-11 04:30:42.948379 | orchestrator | 2026-04-11 04:30:42 - testbed-volume-5-node-base
2026-04-11 04:30:42.995039 | orchestrator | 2026-04-11 04:30:42 - testbed-volume-2-node-base
2026-04-11 04:30:43.039902 | orchestrator | 2026-04-11 04:30:43 - testbed-volume-0-node-base
2026-04-11 04:30:43.083986 | orchestrator | 2026-04-11 04:30:43 - testbed-volume-4-node-base
2026-04-11 04:30:43.130008 | orchestrator | 2026-04-11 04:30:43 - testbed-volume-8-node-5
2026-04-11 04:30:43.175114 | orchestrator | 2026-04-11 04:30:43 - testbed-volume-6-node-3
2026-04-11 04:30:43.225390 | orchestrator | 2026-04-11 04:30:43 - testbed-volume-0-node-3
2026-04-11 04:30:43.271935 | orchestrator | 2026-04-11 04:30:43 - testbed-volume-2-node-5
2026-04-11 04:30:43.315944 | orchestrator | 2026-04-11 04:30:43 - testbed-volume-5-node-5
2026-04-11 04:30:43.361770 | orchestrator | 2026-04-11 04:30:43 - testbed-volume-7-node-4
2026-04-11 04:30:43.408742 | orchestrator | 2026-04-11 04:30:43 - testbed-volume-3-node-3
2026-04-11 04:30:43.453901 | orchestrator | 2026-04-11 04:30:43 - testbed-volume-4-node-4
2026-04-11 04:30:43.502845 | orchestrator | 2026-04-11 04:30:43 - testbed-volume-1-node-4
2026-04-11 04:30:43.546381 | orchestrator | 2026-04-11 04:30:43 - disconnect routers
2026-04-11 04:30:43.765471 | orchestrator | 2026-04-11 04:30:43 - testbed
2026-04-11 04:30:44.833450 | orchestrator | 2026-04-11 04:30:44 - clean up subnets
2026-04-11 04:30:44.885906 | orchestrator | 2026-04-11 04:30:44 - subnet-testbed-management
2026-04-11 04:30:45.045368 | orchestrator | 2026-04-11 04:30:45 - clean up networks
2026-04-11 04:30:45.217821 | orchestrator | 2026-04-11 04:30:45 - net-testbed-management
2026-04-11 04:30:45.511195 | orchestrator | 2026-04-11 04:30:45 - clean up security groups
2026-04-11 04:30:45.553875 | orchestrator | 2026-04-11 04:30:45 - testbed-node
2026-04-11 04:30:45.668582 | orchestrator | 2026-04-11 04:30:45 - testbed-management
2026-04-11 04:30:45.799432 | orchestrator | 2026-04-11 04:30:45 - clean up floating ips
2026-04-11 04:30:45.838406 | orchestrator | 2026-04-11 04:30:45 - 81.163.193.75
2026-04-11 04:30:46.200661 | orchestrator | 2026-04-11 04:30:46 - clean up routers
2026-04-11 04:30:46.336223 | orchestrator | 2026-04-11 04:30:46 - testbed
2026-04-11 04:30:48.188691 | orchestrator | ok: Runtime: 0:00:21.532414
2026-04-11 04:30:48.193236 |
2026-04-11 04:30:48.193415 | PLAY RECAP
2026-04-11 04:30:48.193558 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-11 04:30:48.193629 |
2026-04-11 04:30:48.329689 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-11 04:30:48.330722 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-11 04:30:49.092391 |
2026-04-11 04:30:49.092567 | PLAY [Cleanup play]
2026-04-11 04:30:49.109056 |
2026-04-11 04:30:49.109300 | TASK [Set cloud fact (Zuul deployment)]
2026-04-11 04:30:49.161940 | orchestrator | ok
2026-04-11 04:30:49.169005 |
2026-04-11 04:30:49.169133 | TASK [Set cloud fact (local deployment)]
2026-04-11 04:30:49.205375 | orchestrator | skipping: Conditional result was False
2026-04-11 04:30:49.224602 |
2026-04-11 04:30:49.225181 | TASK [Clean the cloud environment]
2026-04-11 04:30:50.434793 | orchestrator | 2026-04-11 04:30:50 - clean up servers
2026-04-11 04:30:51.081336 | orchestrator | 2026-04-11 04:30:51 - clean up keypairs
2026-04-11 04:30:51.100647 | orchestrator | 2026-04-11 04:30:51 - wait for servers to be gone
2026-04-11 04:30:51.157166 | orchestrator | 2026-04-11 04:30:51 - clean up ports
2026-04-11 04:30:51.252688 | orchestrator | 2026-04-11 04:30:51 - clean up volumes
2026-04-11 04:30:51.331205 | orchestrator | 2026-04-11 04:30:51 - disconnect routers
2026-04-11 04:30:51.358506 | orchestrator | 2026-04-11 04:30:51 - clean up subnets
2026-04-11 04:30:51.399654 | orchestrator | 2026-04-11 04:30:51 - clean up networks
2026-04-11 04:30:51.537583 | orchestrator | 2026-04-11 04:30:51 - clean up security groups
2026-04-11 04:30:51.573660 | orchestrator | 2026-04-11 04:30:51 - clean up floating ips
2026-04-11 04:30:51.597880 | orchestrator | 2026-04-11 04:30:51 - clean up routers
2026-04-11 04:30:51.777646 | orchestrator | ok: Runtime: 0:00:01.594619
2026-04-11 04:30:51.779485 |
2026-04-11 04:30:51.779570 | PLAY RECAP
2026-04-11 04:30:51.779626 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-11 04:30:51.779651 |
2026-04-11 04:30:51.919570 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-11 04:30:51.920738 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-11 04:30:52.780093 |
2026-04-11 04:30:52.780417 | PLAY [Base post-fetch]
2026-04-11 04:30:52.800907 |
2026-04-11 04:30:52.801152 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-11 04:30:52.880305 | orchestrator | skipping: Conditional result was False
2026-04-11 04:30:52.892434 |
2026-04-11 04:30:52.892644 | TASK [fetch-output : Set log path for single node]
2026-04-11 04:30:52.951263 | orchestrator | ok
2026-04-11 04:30:52.960359 |
2026-04-11 04:30:52.960511 | LOOP [fetch-output : Ensure local output dirs]
2026-04-11 04:30:53.542106 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/0543a1f4268f47198bb837b73a2a155d/work/logs"
2026-04-11 04:30:53.836784 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0543a1f4268f47198bb837b73a2a155d/work/artifacts"
2026-04-11 04:30:54.126160 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0543a1f4268f47198bb837b73a2a155d/work/docs"
2026-04-11 04:30:54.151655 |
2026-04-11 04:30:54.151834 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-11 04:30:55.108588 | orchestrator | changed: .d..t...... ./
2026-04-11 04:30:55.109882 | orchestrator | changed: All items complete
2026-04-11 04:30:55.109994 |
2026-04-11 04:30:55.833536 | orchestrator | changed: .d..t...... ./
2026-04-11 04:30:56.555281 | orchestrator | changed: .d..t...... ./
2026-04-11 04:30:56.583038 |
2026-04-11 04:30:56.583231 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-11 04:30:56.621451 | orchestrator | skipping: Conditional result was False
2026-04-11 04:30:56.625560 | orchestrator | skipping: Conditional result was False
2026-04-11 04:30:56.649062 |
2026-04-11 04:30:56.649218 | PLAY RECAP
2026-04-11 04:30:56.649305 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-11 04:30:56.649350 |
2026-04-11 04:30:56.784808 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-11 04:30:56.787137 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-11 04:30:57.569934 |
2026-04-11 04:30:57.570101 | PLAY [Base post]
2026-04-11 04:30:57.591233 |
2026-04-11 04:30:57.591380 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-11 04:30:58.630925 | orchestrator | changed
2026-04-11 04:30:58.645504 |
2026-04-11 04:30:58.645806 | PLAY RECAP
2026-04-11 04:30:58.645910 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-11 04:30:58.646074 |
2026-04-11 04:30:58.804893 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-11 04:30:58.807474 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-11 04:30:59.688786 |
2026-04-11 04:30:59.688995 | PLAY [Base post-logs]
2026-04-11 04:30:59.700537 |
2026-04-11 04:30:59.700680 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-11 04:31:00.270148 | localhost | changed
2026-04-11 04:31:00.288255 |
2026-04-11 04:31:00.288457 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-11 04:31:00.327121 | localhost | ok
2026-04-11 04:31:00.334163 |
2026-04-11 04:31:00.334362 | TASK [Set zuul-log-path fact]
2026-04-11 04:31:00.355104 | localhost | ok
2026-04-11 04:31:00.380156 |
2026-04-11 04:31:00.380383 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-11 04:31:00.418800 | localhost | ok
2026-04-11 04:31:00.425066 |
2026-04-11 04:31:00.425246 | TASK [upload-logs : Create log directories]
2026-04-11 04:31:00.975750 | localhost | changed
2026-04-11 04:31:00.980398 |
2026-04-11 04:31:00.980550 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-11 04:31:01.494513 | localhost -> localhost | ok: Runtime: 0:00:00.007146
2026-04-11 04:31:01.503303 |
2026-04-11 04:31:01.503501 | TASK [upload-logs : Upload logs to log server]
2026-04-11 04:31:02.100138 | localhost | Output suppressed because no_log was given
2026-04-11 04:31:02.103911 |
2026-04-11 04:31:02.104093 | LOOP [upload-logs : Compress console log and json output]
2026-04-11 04:31:02.158027 | localhost | skipping: Conditional result was False
2026-04-11 04:31:02.162631 | localhost | skipping: Conditional result was False
2026-04-11 04:31:02.176043 |
2026-04-11 04:31:02.176350 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-11 04:31:02.223171 | localhost | skipping: Conditional result was False
2026-04-11 04:31:02.223894 |
2026-04-11 04:31:02.227224 | localhost | skipping: Conditional result was False
2026-04-11 04:31:02.239716 |
2026-04-11 04:31:02.239917 | LOOP [upload-logs : Upload console log and json output]